AWS Security Blog
Category: Amazon Bedrock
Use an Amazon Bedrock powered chatbot with Amazon Security Lake to help investigate incidents
In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. You do this by using SageMaker Studio with Amazon Bedrock to automatically generate and run SQL queries on Amazon Athena against data in Amazon Security Lake. The Security Lake service team and […]
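For flavor, here is a minimal sketch of that pattern using boto3: a natural-language question is turned into Athena SQL by a Bedrock-hosted model, and the generated query is then submitted to Athena. The model ID, Glue database, table name, and S3 output location are placeholder assumptions, not values from the post.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
athena = boto3.client("athena")

def question_to_sql(question: str) -> str:
    """Ask a Bedrock-hosted model to translate a question into Athena SQL.
    The model ID and table hint below are illustrative assumptions."""
    prompt = (
        "Write a single Athena SQL query for the table "
        "amazon_security_lake_glue_db.cloudtrail_mgmt that answers: "  # hypothetical table
        f"{question}\nReturn only the SQL."
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model choice
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

def run_in_athena(sql: str) -> str:
    """Submit the generated SQL to Athena and return the query execution ID."""
    result = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "amazon_security_lake_glue_db"},  # placeholder
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder
    )
    return result["QueryExecutionId"]

if __name__ == "__main__":
    sql = question_to_sql("Which IAM principals called DeleteTrail in the last 7 days?")
    print(run_in_athena(sql))
```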
Announcing AWS Security Reference Architecture Code Examples for Generative AI
Amazon Web Services (AWS) is pleased to announce the release of new Security Reference Architecture (SRA) code examples for securing generative AI workloads. The examples include two comprehensive capabilities focusing on secure model inference and RAG implementations, covering a wide range of security best practices using AWS generative AI services. These new code examples are […]
Implementing least privilege access for Amazon Bedrock
April 9, 2025: We updated content about Amazon Bedrock Guardrails to cover the recently added condition key bedrock:GuardrailIdentifier. March 27, 2025: Two policies in this post were updated. Generative AI applications often involve a combination of various services and features—such as Amazon Bedrock and large language models (LLMs)—to generate content and to access potentially confidential […]
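As a hedged illustration of the condition key mentioned above, the sketch below creates an IAM policy that allows model invocation only when a specific guardrail is applied to the request. The account ID, Region, guardrail ARN, and policy name are placeholders, and the exact statements recommended in the post may differ.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder guardrail ARN -- substitute your own account, Region, and guardrail ID.
GUARDRAIL_ARN = "arn:aws:bedrock:us-east-1:111122223333:guardrail/abcd1234efgh"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeOnlyWithRequiredGuardrail",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*",
            "Condition": {
                "StringEquals": {"bedrock:GuardrailIdentifier": GUARDRAIL_ARN}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="BedrockInvokeWithGuardrail",  # illustrative name
    PolicyDocument=json.dumps(policy_document),
)
```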
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 2
In part 1 of this blog series, we walked through the risks associated with using sensitive data as part of your generative AI application. This overview provided a baseline of the challenges of using sensitive data with a non-deterministic large language model (LLM) and how to mitigate these challenges with Amazon Bedrock Agents. The next […]
Securing the RAG ingestion pipeline: Filtering mechanisms
Retrieval Augmented Generation (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, web scrapings, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically add additional context to an LLM’s prompt […]
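As one illustrative filtering mechanism (an assumption on our part, not necessarily the exact controls the post describes), the sketch below screens documents for PII with Amazon Comprehend before they are allowed into the ingestion pipeline; the threshold and sample documents are made up.

```python
import boto3

comprehend = boto3.client("comprehend")

def is_safe_to_ingest(text: str, threshold: float = 0.8) -> bool:
    """Return False if Comprehend detects PII above the confidence threshold.
    The 0.8 threshold is an arbitrary illustrative value."""
    findings = comprehend.detect_pii_entities(Text=text[:5000], LanguageCode="en")
    return not any(e["Score"] >= threshold for e in findings["Entities"])

documents = [
    "Quarterly architecture review notes ...",
    "Customer callback list: Jane Doe, 555-0142 ...",  # would be filtered out
]

# Only documents that pass the filter continue to chunking and embedding.
clean_documents = [d for d in documents if is_safe_to_ingest(d)]
print(f"{len(clean_documents)} of {len(documents)} documents passed the PII filter")
```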
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 1
April 3, 2025: We’ve updated this post to reflect the new 2025 OWASP top 10 for LLM entries. This is part 1 of a two-part blog series. See part 2. Data security and data authorization, as distinct from user authorization, are critical components of business workload architectures. Their importance has grown with the evolution […]
Enhancing data privacy with layered authorization for Amazon Bedrock Agents
April 3, 2025: We’ve updated this post to reflect the new 2025 OWASP top 10 for LLM entries. Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience for […]
Network perimeter security protections for generative AI
Generative AI–based applications have grown in popularity in the last couple of years. Applications built with large language models (LLMs) have the potential to increase the value companies bring to their customers. In this blog post, we dive deep into network perimeter protection for generative AI applications. We’ll walk through the different areas of network […]
Hardening the RAG chatbot architecture powered by Amazon Bedrock: Blueprint for secure design and anti-pattern mitigation
Mitigate risks like data exposure, model exploits, and ethical lapses when deploying Amazon Bedrock chatbots. Implement guardrails, encryption, access controls, and governance frameworks.
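To make one of those controls concrete, here is a small, hedged sketch of invoking a Bedrock model with a guardrail applied at request time; the guardrail ID, version, and model ID are placeholder values, and this is only one of the safeguards the post covers.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model choice
    guardrailIdentifier="abcd1234efgh",                 # placeholder guardrail ID
    guardrailVersion="1",                               # placeholder version
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize our refund policy."}],
    }),
)

result = json.loads(response["body"].read())
# When the guardrail intervenes, Bedrock returns a blocked or masked response
# in place of the raw model output.
print(result["content"][0]["text"])
```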
Context window overflow: Breaking the barrier
Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But what happens when you exceed the […]
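To illustrate the idea (not the post’s specific example), the sketch below uses a rough characters-per-token heuristic to drop the oldest conversation turns so a prompt stays inside an assumed context budget; both numbers are assumptions for demonstration, not real model limits.

```python
# Rough illustration: keep a chat history inside a fixed context budget by
# dropping the oldest turns. The 4-characters-per-token ratio and the
# 8,000-token budget are illustrative assumptions.

CHARS_PER_TOKEN = 4
CONTEXT_BUDGET_TOKENS = 8000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fit_to_context(turns: list[str], budget: int = CONTEXT_BUDGET_TOKENS) -> list[str]:
    """Keep the most recent turns whose combined estimate fits the budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):  # walk from newest to oldest
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break                 # older turns would overflow the window; drop them
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["user: hello"] * 5000 + ["user: what changed in the last deploy?"]
trimmed = fit_to_context(history)
print(f"kept {len(trimmed)} of {len(history)} turns")
```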