by Institute for AI Policy and Strategy
The Institute for AI Policy & Strategy (IAPS) works to reduce risks related to the development & deployment of frontier AI systems. We focus on AI regulations, compute governance, international governance & China, and lab governance. This feed contains audio versions of some of our outputs. Learn more and read our work at iaps.ai.
Language: English (United States)
Publishing since: September 26, 2023
November 6, 2023
<p class="c5 c40">Events that bring together stakeholders from a range of countries to talk about AI safety (henceforth "safety dialogues") are a promising way to reduce large-scale risks from advanced AI systems. The goal of this report is to help safety dialogue organizers make these events as effective as possible at reducing such risks. We first identify “best practices” for organizers, drawing on research about comparable past events, literature about track II diplomacy, and our experience with international relations topics in AI governance. We then identify harmful outcomes that might result from safety dialogues, and ideas for how organizers can avoid them. Finally, we overview promising AI safety interventions that have already been identified and that might be particularly fruitful to discuss during a safety dialogue.</p> <p>---</p><p><strong>Outline:</strong></p><p>(02:21) Best practices for organizers</p><p>(07:52) Harmful outcomes to avoid</p><p>(11:04) Interventions to discuss at safety dialogues</p><p>(13:03) 1. Introduction</p><p>(17:31) 2. Best practices for organizers</p><p>(17:58) Method for identifying recommendations</p><p>(21:43) “Best practice” recommendations</p><p>(22:26) Culture of the safety dialogues</p><p>(22:43) Make the dialogue non-partisan</p><p>(24:20) Promote a spirit of collaborative truth-seeking</p><p>(27:52) Create high-trust relationships between the participants</p><p>(29:32) Create high-trust relationships between the participants and facilitators</p><p>(30:30) Communicating about safety dialogues to outsiders</p><p>(30:35) Maintain confidentiality about what was said by whom</p><p>(31:26) Consider maintaining confidentiality about who is attending</p><p>(32:49) Consider publishing a readout after the dialogue</p><p>(34:55) Content of the event</p><p>(34:59) Provide inputs to encourage participants down a productive path</p><p>(36:32) Sometimes split participants into working groups</p><p>(37:20) Selecting participants to invite</p><p>(37:24) Choose participants who will engage constructively</p><p>(38:47) Consider including participants from a range of countries</p><p>(40:40) Consider the right level of participant “turnover” between dialogues</p><p>(41:30) Logistical details</p><p>(41:34) Choose a suitable location</p><p>(42:58) Reduce language barriers</p><p>(44:08) 3. Harmful outcomes to avoid</p><p>(44:45) Promoting interest in AI capabilities disproportionately, relative to AI safety</p><p>(47:16) Reducing the influence of safety concerns</p><p>(50:53) Diffusing AI capabilities insights</p><p>(54:12) 4. 
Interventions to discuss at safety dialogues</p><p>(56:07) Overarching AI safety plan</p><p>(56:25) Components of the plan</p><p>(59:20) Role for safety dialogues in the overarching plan</p><p>(01:01:11) Best practices for AI labs</p><p>(01:03:59) Best practices for other relevant actors</p><p>(01:05:51) Acknowledgements</p><p>(01:06:07) Appendix: Additional detail on the “strand 1” case studies</p><p>(01:06:13) Cases that we selected</p><p>(01:08:18) Cases that we did not select</p><p><i>The original text contained 91 footnotes which were omitted from this narration.</i> </p><p>---</p> <p><b>First published:</b><br/> October 31st, 2023 </p> <p><b>Source:</b><br/> <a href="https://www.iaps.ai/research/international-ai-safety-dialogues?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://www.iaps.ai/research/international-ai-safety-dialogues</a> </p>
October 19, 2023
The complex and evolving threat landscape of frontier AI development requires a multi-layered approach to risk management ("defense-in-depth"). By reviewing cybersecurity and AI frameworks, we outline three approaches that can help identify gaps in the management of AI-related risks.

First, a functional approach identifies essential categories of activities ("functions") that a risk management approach should cover, as in the NIST Cybersecurity Framework (CSF) and AI Risk Management Framework (AI RMF).

Second, a lifecycle approach instead assigns safety and security activities across the model development lifecycle, as in DevSecOps and the OECD AI lifecycle framework.

Third, a threat-based approach identifies tactics, techniques, and procedures (TTPs) used by malicious actors, as in the MITRE ATT&CK and MITRE ATLAS databases.

We recommend that frontier AI developers and policymakers begin by adopting the functional approach, given the existence of the NIST AI RMF and other supplementary guides, but also establish a detailed frontier AI lifecycle model and threat-based TTP databases for future use.

---

Outline:

(00:18) Executive Summary
(09:23) 1 | Introduction
(11:34) 2 | Defense-in-depth for frontier AI systems
(12:07) 2.1 | Commonalities between domains implementing defense-in-depth
(16:30) 2.2 | Defense-in-depth in nuclear power
(20:20) 2.3 | Cybersecurity as a model for AI
(20:25) 2.3.1 | Cybersecurity defense-in-depth in the 2000s and beyond
(22:26) 2.3.2 | Complementary approaches to address evolving capabilities and threats
(27:59) 2.3.3 | Benchmarking measures to the appropriate level of risk
(30:55) 2.4 | Three approaches to AI defense-in-depth
(35:05) 3 | Functional approach
(37:44) 3.1 | What does this look like in cybersecurity?
(40:52) 3.2 | Why take a functional approach?
(42:00) 3.3 | Usage for frontier AI governance
(42:54) 3.3.1 | The NIST AI RMF
(44:30) 3.3.2 | Tailoring the AI RMF to frontier AI safety and security concerns
(48:36) 3.3.3 | Providing detailed controls
(51:06) 3.3.4 | Defense-in-depth using the NIST AI RMF
(54:00) 3.4 | Limitations and future work
(55:37) 4 | Lifecycle approach
(57:32) 4.1 | What does this look like in cybersecurity?
(58:24) 4.1.1 | Security Development Lifecycle (SDL) framework
(01:00:12) 4.1.2 | The DevSecOps framework
(01:02:02) 4.2 | Why take a lifecycle approach?
(01:04:40) 4.3 | Usage for frontier AI governance
(01:05:04) 4.3.1 | Existing descriptions of the AI development lifecycle
(01:08:55) 4.3.2 | Proposed lifecycle framework
(01:12:10) 4.3.3 | Discussion of proposed framework
(01:12:15) "Shifting left" on AI safety and security
(01:17:55) Deployment and post-deployment measures
(01:19:22) 4.4 | Limitations and future work
(01:21:29) 5 | Threat-based approach
(01:23:27) 5.1 | What does this look like in cybersecurity?
(01:26:11) 5.1.1 | An alternative threat-based approach: the kill chain
(01:27:41) 5.2 | Why take a threat-based approach?
(01:30:29) 5.3 | Usage for frontier AI governance
(01:30:34) 5.3.1 | Existing work
(01:34:05) 5.3.2 | Proposed threat-based approaches
(01:35:24) An "effect on model" approach
(01:37:21) An "effect on world" approach
(01:40:15) 5.3.3 | Application to national critical functions
(01:43:38) 5.4 | Limitations and future work
(01:46:21) 6 | Evaluating and applying the suggested frameworks
(01:46:34) 6.1 | Context for applying frameworks
(01:48:56) 6.2 | Application to existing measures
(01:51:59) 6.2.1 | Functional
(01:56:13) 6.2.2 | Lifecycle
(01:58:12) 7 | Conclusion
(01:58:37) 7.1 | Overview of Next Steps
(02:00:29) 7.2 | Recommendations
(02:01:15) Acknowledgments
(02:02:50) Appendix A: Relevant frameworks in nuclear reactor safety and cybersecurity
(02:03:14) Appendix A-1: Defense-in-depth levels in nuclear reactor safety
(02:04:18) Appendix A-2: Relevant cybersecurity frameworks
(02:04:24) Defense-in-depth frameworks
(02:07:11) NIST SP 800-172: Defense-in-depth against advanced persistent threats
(02:10:06) Appendix A-3: The NIST Cybersecurity Framework (CSF)
(02:12:42) Common uses of the NIST CSF
(02:14:26) Appendix B: NIST AI Risk Management Framework
(02:15:20) Appendix B-1: Govern
(02:20:19) Appendix B-2: Map
(02:25:35) Appendix B-3: Measure
(02:31:04) Appendix B-4: Manage

The original text contained 123 footnotes which were omitted from this narration.

---

First published: October 13th, 2023

Source: https://www.iaps.ai/research/adapting-cybersecurity-frameworks
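To make the threat-based approach described in this episode concrete: a TTP catalog lends itself to a machine-checkable representation that can be compared against an organization's deployed mitigations to surface coverage gaps. The sketch below is a minimal, hypothetical Python illustration; the TTP IDs, technique names, and mitigation labels are invented for this example and are not drawn from the actual MITRE ATT&CK or ATLAS databases.

```python
# Hypothetical sketch of a threat-based gap check: TTP entries
# (in the style of MITRE ATT&CK / ATLAS) are compared against the
# mitigations an AI developer has deployed. All IDs, names, and
# mitigation labels below are illustrative, not real database entries.

from dataclasses import dataclass, field


@dataclass
class TTP:
    """One tactic/technique/procedure entry in the catalog."""
    ttp_id: str
    name: str
    mitigations: list[str] = field(default_factory=list)


# An invented three-entry catalog for illustration.
CATALOG = [
    TTP("EX-001", "Model weight exfiltration",
        ["access-control", "egress-monitoring"]),
    TTP("EX-002", "Prompt injection via tool output",
        ["input-sanitization"]),
    TTP("EX-003", "Fine-tuning to strip safeguards",
        ["weights-not-released"]),
]


def coverage_gaps(catalog: list[TTP], deployed: set[str]) -> list[TTP]:
    """Return the TTPs for which none of the listed mitigations are deployed."""
    return [t for t in catalog if not set(t.mitigations) & deployed]


if __name__ == "__main__":
    deployed_controls = {"access-control", "input-sanitization"}
    for ttp in coverage_gaps(CATALOG, deployed_controls):
        print(f"Uncovered: {ttp.ttp_id} {ttp.name}")
    # Prints: Uncovered: EX-003 Fine-tuning to strip safeguards
```

A real catalog would carry far richer metadata (detection guidance, affected lifecycle stages, severity), but the gap-check pattern stays the same.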
October 4, 2023
This report examines the prospect of large-scale smuggling of AI chips into China. AI chip smuggling into China is already happening to a limited extent and may involve greater quantities in the future: demand for AI chips in China is rising while the US has restricted exports of cutting-edge chips to the country. First, we describe the paths such smuggling could take and estimate how many AI chips would be smuggled if China-linked actors were to aim for large-scale smuggling regimes. Second, we outline factors that affect whether and when China-linked actors would aim for large-scale smuggling regimes. Third, we propose six measures for reducing the likelihood of large-scale smuggling.

---

Outline:

(01:01) Short summary
(03:45) Longer summary
(17:51) How the US typically enforces export controls
(29:21) Pathways and feasibility of large-scale smuggling
(31:16) All-things-considered view
(34:29) Routes into China
(35:16) Summary table of potential reexport countries
(37:39) Feasibility of surreptitiously procuring AI chips for reexport
(38:27) Methods of obtaining AI chips
(43:53) Challenges of large-scale smuggling
(46:41) Four factors determining procurement feasibility
(47:25) Demand for AI chips
(53:01) Rule of law
(54:16) Geopolitical alignment
(58:12) Common language
(59:05) Feasibility of surreptitiously transporting AI chips to China
(01:00:20) Sea, land, and air transport
(01:02:40) Clearing customs
(01:04:27) Import/export volume
(01:06:21) China's sides of its borders
(01:07:47) Two possible smuggling regimes
(01:09:28) Summary tables of estimates
(01:12:45) Why the scenarios only concern Nvidia GPUs
(01:13:52) Regime 1: Many shell companies buy small quantities from distributors
(01:15:12) Enforcement of controls if this regime is attempted
(01:18:27) Estimate
(01:23:03) Regime 2: Few cloud provider fronts buy large quantities directly from Nvidia/OEMs
(01:25:50) Enforcement of controls if this regime is attempted
(01:29:31) Estimate
(01:33:42) Will China-linked actors aim for large-scale AI chip smuggling?
(01:35:08) AI chip smuggling today
(01:36:54) Drivers of AI chip smuggling
(01:44:24) Recommendations for US policymakers
(01:47:12) Chip registry
(01:53:13) Increasing BIS's budget
(01:56:45) Stronger due diligence requirements for chip exporters
(01:59:20) Licensing requirement for AI chip exports to key third countries
(02:01:48) Interagency program to secure the AI supply chain
(02:03:30) End-user verification programs in Southeast Asia
(02:05:36) Discussion
(02:06:25) Limitations
(02:11:56) Further research
(02:15:27) Acknowledgments

---

First published: October 4th, 2023

Source: https://www.iaps.ai/research/ai-chip-smuggling-into-china