<p><a href="https://www.rsaconference.com/usa" target="_blank" rel="noopener">RSA Conference 2024</a> drew 650 speakers, 600 exhibitors, and thousands of security practitioners from across the globe to the Moscone Center in San Francisco, California from May 6 through 9.</p>
<p>The keynote lineup was diverse, with 33 presentations featuring speakers ranging from <em>WarGames</em> actor <a href="https://insight.scmagazineuk.com/rsac-broderick-recalls-how-war-games-impact-on-us-miliary-policy" target="_blank" rel="noopener">Matthew Broderick</a> to public- and private-sector luminaries such as Cybersecurity and Infrastructure Security Agency (CISA) Director <a href="https://www.rsaconference.com/experts/Jen%20Easterly" target="_blank" rel="noopener">Jen Easterly</a>, U.S. Secretary of State <a href="https://www.youtube.com/watch?v=kewgNe8q260" target="_blank" rel="noopener">Antony Blinken</a>, security technologist <a href="https://www.rsaconference.com/experts/bruce-schneier" target="_blank" rel="noopener">Bruce Schneier</a>, and cryptography experts <a href="https://www.rsaconference.com/experts/tal-rabin" target="_blank" rel="noopener">Tal Rabin</a>, <a href="https://www.rsaconference.com/experts/Dr%20Whitfield%20Diffie%20ForMemRS" target="_blank" rel="noopener">Whitfield Diffie</a>, and <a href="https://www.rsaconference.com/experts/adi-shamir" target="_blank" rel="noopener">Adi Shamir</a>.</p>
<p>Topics aligned with this year’s conference theme, “The art of possible,” and focused on actions we can take to revolutionize technology through innovation, while fortifying our defenses against an evolving threat landscape.</p>
<p>This post highlights three themes that caught our attention: artificial intelligence (AI) security, the Secure by Design approach to building products and services, and Chief Information Security Officer (CISO) collaboration.</p>
<h2>AI security</h2>
<p>Organizations in all industries have started building <a href="https://aws.amazon.com/what-is/generative-ai/" target="_blank" rel="noopener">generative AI</a> applications using large language models (LLMs) and other foundation models (FMs) to enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. So it’s not surprising that <a href="https://www.youtube.com/watch?v=QgXW8H4WldA" target="_blank" rel="noopener">AI</a> dominated conversations. Over 100 sessions touched on the topic, and attendees were clearly eager to understand AI technology and learn how to balance its risks and opportunities.</p>
<table width="100%">
<tbody>
<tr>
<td width="100%"> <p>“Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system.” — <em>CISA</em></p></td>
</tr>
</tbody>
</table>
<p>FMs and the applications built around them are often used with highly sensitive business data such as personal data, compliance data, operational data, and financial information to optimize the model’s output. As we explore the advantages of generative AI, protecting highly sensitive data and investments is a top priority. However, many organizations aren’t paying enough <a href="https://www.theregister.com/2024/05/13/aws_ciso_ai_security/" target="_blank" rel="noopener">attention to security</a>. </p>
<p>A <a href="https://www.ibm.com/downloads/cas/2L73BYB4?mod=djemCybersecruityPro&tpl=cs" target="_blank" rel="noopener">joint generative AI security report</a> released by <a href="https://aws.amazon.com/" target="_blank" rel="noopener">Amazon Web Services (AWS)</a> and the IBM Institute for Business Value during the conference found that 82% of business leaders view secure and trustworthy AI as essential for their operations, but <a href="https://biztechmagazine.com/media/video/rsa-2024-state-artificial-intelligence-cybersecurity" target="_blank" rel="noopener">only 24%</a> are actively securing generative AI models and embedding security processes in AI development. In fact, nearly 70% say innovation takes precedence over security, despite concerns over threats and vulnerabilities (detailed in Figure 1).</p>
<div id="attachment_34418" class="wp-caption aligncenter">
<img aria-describedby="caption-attachment-34418" src="https://infracom.com.sg/wp-content/uploads/2024/05/img1-11.png" alt="Figure 1: Generative AI adoption concerns" width="780" class="size-full wp-image-34418">
<p id="caption-attachment-34418" class="wp-caption-text">Figure 1: Generative AI adoption concerns, <span>Source: IBM Security</span></p>
</div>
<p>Because data and model weights—the numerical values models learn and adjust as they train—are incredibly valuable, organizations need them to stay <a href="https://aws.amazon.com/blogs/machine-learning/a-secure-approach-to-generative-ai-with-aws/" target="_blank" rel="noopener">protected, secure, and private</a>, whether that means restricting access from an organization’s own administrators, customers, or <a href="https://aws.amazon.com/ai/generative-ai/security/" target="_blank" rel="noopener">cloud service provider</a>, or protecting data from vulnerabilities in software running in the organization’s own environment.</p>
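<p>As a simple illustration of one such control, the following Python sketch (an assumption-laden example, not a prescribed AWS pattern) encrypts model weights client-side with AES-256-GCM so the files are protected before they are stored or uploaded anywhere. The file names are hypothetical, and in practice the key would come from a managed key service rather than being generated locally.</p>
<pre><code>
# Minimal sketch: protect model weights at rest by encrypting them client-side.
# Requires the "cryptography" package; file names below are illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_weights(plaintext_path, ciphertext_path, key):
    """Encrypt a weights file with AES-256-GCM using a random 96-bit nonce."""
    nonce = os.urandom(12)
    data = open(plaintext_path, "rb").read()
    sealed = AESGCM(key).encrypt(nonce, data, None)
    with open(ciphertext_path, "wb") as f:
        f.write(nonce + sealed)  # keep the nonce alongside the ciphertext

def decrypt_weights(ciphertext_path, key):
    """Reverse the operation; raises an exception if the file was tampered with."""
    blob = open(ciphertext_path, "rb").read()
    nonce, sealed = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, sealed, None)

if __name__ == "__main__":
    # In production, fetch the key from a key management service instead.
    key = AESGCM.generate_key(bit_length=256)
    open("model.weights", "wb").write(b"dummy weights for demonstration")
    encrypt_weights("model.weights", "model.weights.enc", key)
    print(len(decrypt_weights("model.weights.enc", key)), "bytes recovered")
</code></pre>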
<p>There is no silver AI-security bullet, but as the report points out, there are proactive steps you can take to start protecting your organization and leveraging AI technology to improve your security posture:</p>
<ol>
<li><strong>Establish a governance, risk, and compliance (GRC) foundation</strong>. Trust in gen AI starts with new security governance models (Figure 2) that integrate and embed GRC capabilities into your AI initiatives, and include policies, processes, and controls that are aligned with your business objectives.
<div id="attachment_34419" class="wp-caption aligncenter">
<img aria-describedby="caption-attachment-34419" src="https://infracom.com.sg/wp-content/uploads/2024/05/img2-10.png" alt="Figure 2: Updating governance, risk, and compliance models" width="740" class="size-full wp-image-34419">
<p id="caption-attachment-34419" class="wp-caption-text">Figure 2: Updating governance, risk, and compliance models, <span>Source: IBM Security</span></p>
</div> <p>In the RSA Conference session <a href="https://static.rainfocus.com/rsac/us24/sess/1695165210911001kWiH/finalwebsite/2024_USA24_IAIS-M02_01_AI-Law-Policy-and-Common-Sense-Suggestions-to-Stay-Out-of-Trouble_1714846426609001IqDJ.pdf" target="_blank" rel="noopener">AI: Law, Policy, and Common Sense Suggestions to Stay Out of Trouble</a>, digital commerce and gaming attorney <a href="https://www.rsaconference.com/experts/behnam-dayanim" target="_blank" rel="noopener">Behnam Dayanim</a> highlighted ethical, policy, and legal considerations—including <a href="https://www.asisonline.org/security-management-magazine/monthly-issues/security-technology/archive/2024/april/Understanding-the-EU-AI-Act/" target="_blank" rel="noopener">AI-specific regulations</a>—as well as governance structures such as the <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf" target="_blank" rel="noopener">National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0)</a> that can help maximize a successful implementation and minimize potential risk.</p> </li>
<li><strong>Strengthen your </strong><a href="https://aws.amazon.com/blogs/security/building-a-security-first-mindset-three-key-themes-from-aws-reinvent-2023/" target="_blank" rel="noopener"><strong>security culture</strong></a>. When we think of securing AI, it’s natural to focus on technical measures that can help protect the business. But organizations are made up of people—not technology. Educating employees at all levels of the organization can help avoid preventable harms such as prompt-based risks and unapproved tool use, and foster a <a href="https://aws.amazon.com/blogs/security/how-the-unique-culture-of-security-at-aws-makes-a-difference/" target="_blank" rel="noopener">resilient culture of cybersecurity</a> that supports effective risk mitigation, incident detection and response, and continuous collaboration.<br><table width="100%">
<tbody>
<tr>
<td width="100%">“You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — <strong>Hart Rossman</strong>, <em>Global Services Security Vice President at AWS</em></td>
</tr>
</tbody>
</table> </li>
<li><strong>Engage with partners</strong>. Developing and securing AI solutions requires resources and skills that many organizations lack. Partners can provide you with comprehensive security support—whether that’s informing and advising you about generative AI, or augmenting your delivery and support capabilities. This can help make your engineers and your security controls more effective. <p>While many organizations purchase security products or solutions with embedded generative AI capabilities, nearly two-thirds, as detailed in Figure 3, report that their generative AI security capabilities come through some type of partner. </p>
<div id="attachment_34420" class="wp-caption aligncenter">
<img aria-describedby="caption-attachment-34420" src="https://infracom.com.sg/wp-content/uploads/2024/05/img3-10.png" alt="Figure 3: More than 90% of security gen AI capabilities are coming from third-party products or partners" width="740" class="size-full wp-image-34420">
<p id="caption-attachment-34420" class="wp-caption-text">Figure 3: Most security gen AI capabilities are coming from third-party products or partners, <span>Source: IBM Security</span></p>
</div> <p>Tens of thousands of customers are using AWS, for example, to experiment and move transformative generative AI applications into production. AWS provides <a href="https://aws.amazon.com/ai/services/" target="_blank" rel="noopener">AI-powered tools and services</a>, a <a href="https://aws.amazon.com/ai/generative-ai/innovation-center/" target="_blank" rel="noopener">Generative AI Innovation Center</a> program, and an extensive network of <a href="https://aws.amazon.com/ai/partners/?aws-marketplace-cards.sort-by=item.additionalFields.sortOrder&aws-marketplace-cards.sort-order=asc&awsf.aws-marketplace-aws-marketplace-aim=*all&awsf.aws-marketplace-aim=*all" target="_blank" rel="noopener">AWS partners</a> that have demonstrated expertise delivering <a href="https://aws.amazon.com/compare/the-difference-between-artificial-intelligence-and-machine-learning/#:~:text=ML%20is%20best%20for%20identifying,data%20to%20solve%20specific%20problems.&text=AI%20may%20use%20a%20wide,weights%20to%20train%20the%20model." target="_blank" rel="noopener">machine learning (ML)</a> and generative AI solutions. These resources can support your teams with hands-on help developing solutions mapped to your requirements, and a broader collection of knowledge they can use to help you make the nuanced decisions required for effective security.</p> </li>
</ol>
<p>View <a href="https://www.ibm.com/downloads/cas/2L73BYB4?mod=djemCybersecruityPro&tpl=cs" target="_blank" rel="noopener">the joint report</a> and AWS generative AI security <a href="https://aws.amazon.com/ai/generative-ai/security/" target="_blank" rel="noopener">resources</a> for additional guidance.</p>
<h2>Secure by Design</h2>
<p>Building secure software was another popular focus at the conference, and one closely related to AI security. Insecure design ranks fourth among critical web application security concerns on the <a href="https://owasp.org/Top10/" target="_blank" rel="noopener">Open Web Application Security Project (OWASP) Top 10</a>.</p>
<p>The concept known as <em>Secure by Design</em> is gaining importance in the effort to mitigate vulnerabilities early, minimize risks, and recognize security as a core business requirement. Secure by Design builds off of security models such as <a href="https://aws.amazon.com/executive-insights/content/zero-trust-charting-a-path-to-stronger-security/" target="_blank" rel="noopener">Zero Trust</a>, and aims to reduce the burden of cybersecurity and break the cycle of constantly creating and applying updates by developing products that are foundationally secure.</p>
<p>More than 60 technology companies—including AWS—signed CISA’s <a href="https://www.cisa.gov/securebydesign/pledge" target="_blank" rel="noopener">Secure by Design Pledge</a> during RSA Conference as part of a collaborative push to put security first when designing products and services.</p>
<p>The pledge demonstrates a commitment to making measurable progress towards seven goals within a year:</p>
<ul>
<li>Broaden the use of multi-factor authentication (MFA)</li>
<li>Reduce default passwords</li>
<li>Enable a significant reduction in the prevalence of one or more vulnerability classes (see the illustrative sketch after this list)</li>
<li>Increase the installation of security patches by customers</li>
<li>Publish a vulnerability disclosure policy (VDP)</li>
<li>Demonstrate transparency in vulnerability reporting</li>
<li>Strengthen the ability of customers to gather evidence of cybersecurity intrusions affecting products</li>
</ul>
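<p>To make the vulnerability-class goal concrete, here is a brief, hypothetical Python sketch using the standard library’s sqlite3 module: replacing string-built SQL with parameterized queries removes the SQL injection class from a code path by design, rather than patching individual instances. The table and values shown are invented for illustration.</p>
<pre><code>
# Hypothetical example: the users table and lookup functions are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # Vulnerable pattern: attacker-controlled input is concatenated into the query.
    query = "SELECT email FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: input is always treated as data, never as SQL,
    # so this code path is immune to injection by design.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_safe("alice"))
</code></pre>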
<table width="100%">
<tbody>
<tr>
<td width="100%"> <p>“From day one, we have pioneered secure by design and secure by default practices in the cloud, so AWS is designed to be the most secure place for customers to run their workloads. We are committed to continuing to help organizations around the world elevate their security posture, and we look forward to collaborating with CISA and other stakeholders to further grow and promote security by design and default practices.” — <strong>Chris Betz</strong>, <em>CISO at AWS</em></p></td>
</tr>
</tbody>
</table>
<p>The need for security by design applies to AI just as it does to any other software system. To protect users and data, we need to build security into ML and AI with a Secure by Design approach that treats these technologies as part of a larger software system and weaves security into the AI pipeline.</p>
<p>Since models tend to have very high privileges and access to data, integrating an AI bill of materials (AI/ML BOM) and Cryptography Bill of Materials (CBOM) into BOM processes can help you catalog security-relevant information, and gain visibility into model components and data sources. Additionally, frameworks and standards such as the <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf" target="_blank" rel="noopener">AI RMF 1.0</a>, the <a href="https://hitrustalliance.net/press-releases/hitrust-releases-the-industrys-first-ai-assurance-program" target="_blank" rel="noopener">HITRUST AI Assurance Program</a>, and <a href="https://www.iso.org/standard/81230.html" target="_blank" rel="noopener">ISO/IEC 42001</a> can facilitate the incorporation of trustworthiness considerations into the design, development, and use of AI systems.</p>
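<p>As a sketch of what that cataloging might look like, the following Python snippet assembles a minimal, CycloneDX-style machine learning BOM entry as JSON. The component name, dataset reference, and field layout are illustrative assumptions rather than a complete or authoritative schema, but they show the kind of security-relevant detail (model identity, training data sources, and intended use) that a BOM process can capture.</p>
<pre><code>
# Illustrative sketch of a minimal ML-BOM record; names and values are hypothetical.
import json

ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "customer-support-summarizer",
            "version": "2024.05.1",
            "description": "Fine-tuned foundation model used by the support portal",
            "modelCard": {
                "modelParameters": {
                    "task": "summarization",
                    "datasets": [
                        {"name": "support-tickets-2023", "classification": "confidential"}
                    ],
                },
                "considerations": {
                    "useCases": ["internal support summarization only"],
                },
            },
        }
    ],
}

# Emit the BOM so it can be stored and reviewed alongside other build artifacts.
print(json.dumps(ml_bom, indent=2))
</code></pre>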
<h2>CISO collaboration</h2>
<p>In the RSA Conference keynote session <a href="https://www.rsaconference.com/usa/agenda/session/CISO%20Confidential%20What%20Separates%20The%20Best%20From%20The%20Rest" target="_blank" rel="noopener">CISO Confidential: What Separates The Best From The Rest</a>, Trellix CEO Bryan Palma and CISO Harold Rivas noted that there are approximately 32,000 global CISOs today—4 times more than 10 years ago. The challenges they face include staffing shortages, liability concerns, and a rapidly evolving threat landscape. According to research conducted by the <a href="https://www.issa.org/new-research-from-techtargets-enterprise-strategy-group-and-the-issa-reveals-continuous-struggles-within-cybersecurity-professional-workforce/" target="_blank" rel="noopener">Information Systems Security Association (ISSA)</a>, nearly half of organizations (46%) report that their cybersecurity team is understaffed, and more than 80% of CISOs recently surveyed by <a href="https://www.trellix.com/solutions/mind-of-the-ciso-decoding-the-genai-impact/" target="_blank" rel="noopener">Trellix</a> have experienced an increase in cybersecurity threats over the past six months. When asked what would most improve their organizations’ abilities to defend against these threats, their top answer was <a href="https://aws.amazon.com/executive-insights/security/" target="_blank" rel="noopener">industry peers</a> sharing insights and best practices.</p>
<p>Building trusted relationships with peers and technology partners can help you gain the knowledge you need to effectively communicate the story of risk to your board of directors, keep up with technology, and <a href="https://aws.amazon.com/executive-insights/content/how-to-be-a-better-ciso/?executive-insights-cards.sort-by=item.additionalFields.sortDate&executive-insights-cards.sort-order=desc" target="_blank" rel="noopener">build success as a CISO</a>.</p>
<p><a href="https://www.youtube.com/watch?v=RB8oZIPs59o" target="_blank" rel="noopener">AWS CISO Circles</a> provide a forum for cybersecurity executives from organizations of all sizes and industries to share their challenges, insights, and best practices. CISOs come together in locations around the world to discuss the biggest security topics of the moment. With <a href="https://aws.amazon.com/executive-insights/security/#Find_your_community_with_AWS_CISO_Circles" target="_blank" rel="noopener">NDAs in place and the Chatham House Rule in effect</a>, security leaders can feel free to speak their minds, ask questions, and get feedback from peers through candid conversations facilitated by AWS Security leaders.</p>
<table width="100%">
<tbody>
<tr>
<td width="100%"> <p>“When it comes to security, community unlocks possibilities. CISO Circles give us an opportunity to deeply lean into CISOs’ concerns, and the topics that resonate with them. Chatham House Rule gives security leaders the confidence they need to speak openly and honestly with each other, and build a global community of knowledge-sharing and support.” — <strong>Clarke Rodgers</strong>, <em>Director of Enterprise Strategy at AWS</em></p></td>
</tr>
</tbody>
</table>
<p>At RSA Conference, CISO Circle attendees discussed the challenges of adopting generative AI. When asked whether CISOs or the business own generative AI risk for the organization, the consensus was that security can help with policies and recommendations, but the business should own the risk and decisions about how and when to use the technology. Some attendees noted that they took initial responsibility for generative AI risk before transitioning ownership over time to an advisory board or committee made up of leaders from their HR, legal, IT, finance, privacy, and compliance and ethics teams. Several CISOs expressed the belief that quickly taking ownership of generative AI risk before shepherding it to the right owner gave them a valuable opportunity to earn trust with their boards and executive peers, and to demonstrate business leadership during a time of uncertainty.</p>
<h2>Embrace the art of possible</h2>
<p>There are many more RSA Conference highlights on a wide range of additional topics, including <a href="https://www.rsaconference.com/usa/agenda/session/Cryptographers-Panel" target="_blank" rel="noopener">post-quantum cryptography developments</a>, <a href="https://static.rainfocus.com/rsac/us24/sess/1696015654575001PaYP/finalwebsite/2024_USA24_CLS-M01_01_Permissions-Centralized-Or-Decentralize-Both_1714575210942001Vq5D.pdf" target="_blank" rel="noopener">identity and access management</a>, <a href="https://static.rainfocus.com/rsac/us24/sess/1694707890613001pqPf/finalwebsite/2024_USA24_CLS-W01_01_Establishing-a-Data-Perimeter-on-AWS_1714575438508001Va6o.pdf" target="_blank" rel="noopener">data perimeters</a>, <a href="https://static.rainfocus.com/rsac/us24/sess/1705189871843001ux0Y/finalwebsite/2024_USA24_DAS-M02_01_Threat-Modeling-Redefined_1713884714294001yoAY.pdf" target="_blank" rel="noopener">threat modeling</a>, <a href="https://www.rsaconference.com/events/2024-usa/agenda/session/Your%20Cybersecurity%20Budget%20Is%20a%20Horses%20Behind" target="_blank" rel="noopener">cybersecurity budgets</a>, and <a href="https://static.rainfocus.com/rsac/us24/sess/1696786028317001EjC9/finalwebsite/2024_USA24_LAW-M05_01_The-Art-of-Cyber-Insurance-Whats-New-in-Coverage-and-Claims_1713916934937001AQ1m.pdf" target="_blank" rel="noopener">cyber insurance trends</a>. If there’s one key takeaway, it’s that we should never underestimate what is possible from threat actors or defenders. By harnessing AI’s potential while addressing its risks, building foundationally secure products and services, and developing meaningful collaboration, we can collectively strengthen security and establish cyber resilience.</p>
<p>Join us to learn more about cloud security in the age of generative AI at <a href="https://aws.amazon.com/blogs/security/explore-cloud-security-in-the-age-of-generative-ai-at-aws-reinforce-2024/" target="_blank" rel="noopener">AWS re:Inforce 2024</a>, June 10–12 in Pennsylvania. <a href="https://register.reinforce.awsevents.com/?trk=direct" target="_blank" rel="noopener">Register today</a> with the code SECBLOfnakb to receive a limited-time $150 USD discount, while supplies last.</p>
<p>If you have feedback about this post, submit comments in the <strong>Comments</strong> section below. If you have questions about this post, <a href="https://console.aws.amazon.com/support/home" target="_blank" rel="noopener noreferrer">contact AWS Support</a>.</p>
<p><strong>Want more AWS Security news? Follow us on <a title="Twitter" href="https://twitter.com/AWSsecurityinfo" target="_blank" rel="noopener noreferrer">Twitter</a>.</strong></p>