AWS Machine Learning Blog
Category: Artificial Intelligence
Optimize query responses with user feedback using Amazon Bedrock embedding and few-shot prompting
This post demonstrates how Amazon Bedrock, combined with a user feedback dataset and few-shot prompting, can refine responses for higher user satisfaction. Using Amazon Titan Text Embeddings v2, we show a statistically significant improvement in response quality, making this approach a valuable tool for applications that need accurate and personalized responses.
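As a rough illustration of the approach described above, the sketch below embeds an incoming query with Amazon Titan Text Embeddings v2 through the Bedrock runtime API and selects the most similar entries from a feedback dataset to use as few-shot examples. The dataset structure and helper names are hypothetical and not taken from the post.

```python
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    """Return a Titan Text Embeddings v2 vector for the given text."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"])

def select_few_shot_examples(query: str, feedback_dataset: list[dict], k: int = 3) -> list[dict]:
    """Pick the k feedback entries most similar to the query by cosine similarity.

    feedback_dataset is a hypothetical list of {"query": ..., "preferred_response": ...} items
    collected from user feedback; the selected entries are inserted into the prompt as few-shot examples.
    """
    query_vec = embed(query)
    scored = []
    for item in feedback_dataset:
        vec = embed(item["query"])
        score = float(np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
        scored.append((score, item))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:k]]
```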
Integrate Amazon Bedrock Agents with Slack
In this post, we present a solution to incorporate Amazon Bedrock Agents in your Slack workspace. We guide you through configuring a Slack workspace, deploying integration components in Amazon Web Services, and using this solution.
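The heart of such an integration is relaying a Slack message to the agent and returning its reply. Below is a minimal sketch of that call using the Bedrock InvokeAgent API; the agent ID, alias ID, and session handling are placeholders, and the Slack event plumbing is omitted.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def ask_agent(user_message: str, session_id: str) -> str:
    """Relay a Slack message to a Bedrock agent and collect the streamed reply.

    AGENT_ID and AGENT_ALIAS_ID are placeholders for your deployed agent.
    """
    response = agent_runtime.invoke_agent(
        agentId="AGENT_ID",
        agentAliasId="AGENT_ALIAS_ID",
        sessionId=session_id,  # reuse per Slack thread to keep conversation context
        inputText=user_message,
    )
    # The completion comes back as an event stream of chunks.
    reply = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            reply += chunk["bytes"].decode("utf-8")
    return reply
```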
Secure distributed logging in scalable multi-account deployments using Amazon Bedrock and LangChain
In this post, we present a solution for secure distributed logging in scalable multi-account deployments using Amazon Bedrock and LangChain.
Build a domain-aware data preprocessing pipeline: A multi-agent collaboration approach
In this post, we introduce a multi-agent collaboration pipeline for processing unstructured insurance data using Amazon Bedrock, featuring specialized agents for classification, conversion, and metadata extraction. We demonstrate how this domain-aware approach transforms diverse data formats like claims documents, videos, and audio files into metadata-rich outputs that enable fraud detection, customer 360-degree views, and advanced analytics.
Automating complex document processing: How Onity Group built an intelligent solution using Amazon Bedrock
In this post, we explore how Onity Group, a financial services company specializing in mortgage servicing and origination, transformed their document processing capabilities using Amazon Bedrock and other AWS services. The solution helped Onity achieve a 50% reduction in document extraction costs while improving overall accuracy by 20% compared to their previous OCR and AI/ML solution.
HERE Technologies boosts developer productivity with new generative AI-powered coding assistant
HERE Technologies collaborated with the AWS Generative AI Innovation Center (GenAIIC). Our joint mission was to create an intelligent AI coding assistant that could provide explanations and executable code solutions in response to users' natural language queries. The requirement was to build a scalable system that could translate natural language questions into HTML code with embedded JavaScript, ready for immediate rendering as an interactive map.
Detect hallucinations for RAG-based systems
This post walks you through how to create a basic hallucination detection system for RAG-based applications. We also weigh the pros and cons of different methods in terms of accuracy, precision, recall, and cost.
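One common baseline for such a system is an LLM-as-judge check that asks a model whether every claim in the answer is grounded in the retrieved context. The sketch below shows one possible version of that check using the Bedrock Converse API; the judge prompt and model choice are illustrative, not the post's exact method.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

JUDGE_PROMPT = """You are checking a RAG answer for hallucinations.
Context:
{context}

Answer:
{answer}

Reply with only SUPPORTED if every claim in the answer is backed by the context,
or HALLUCINATED if any claim is not."""

def is_hallucinated(answer: str, retrieved_context: str) -> bool:
    """LLM-as-judge check: flag the answer if the judge model says it is unsupported."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # any Bedrock chat model can act as judge
        messages=[{
            "role": "user",
            "content": [{"text": JUDGE_PROMPT.format(context=retrieved_context, answer=answer)}],
        }],
        inferenceConfig={"maxTokens": 10, "temperature": 0},
    )
    verdict = response["output"]["message"]["content"][0]["text"].strip().upper()
    return verdict.startswith("HALLUCINATED")
```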
AWS machine learning supports Scuderia Ferrari HP pit stop analysis
Pit crews are trained to operate at peak efficiency, but measuring their performance has historically been challenging. In this post, we share how Amazon Web Services (AWS) is helping Scuderia Ferrari HP develop more accurate pit stop analysis techniques using machine learning (ML).
Accelerate edge AI development with SiMa.ai Edgematic and a seamless AWS integration
In this post, we demonstrate how to retrain and quantize a model using SageMaker AI and the SiMa.ai Palette software suite. The goal is to accurately detect individuals in environments where visibility and protective equipment detection are essential for compliance and safety.
How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod
Building on this foundation of specialized information extraction solutions and using the capabilities of SageMaker HyperPod, we collaborate with Apoidea Group to explore the use of large vision language models (LVLMs) to further improve table structure recognition performance on banking and financial documents. In this post, we present our work and step-by-step code on fine-tuning the Qwen2-VL-7B-Instruct model using LLaMA-Factory on SageMaker HyperPod.
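For readers who want a sense of what such a fine-tuning run looks like, the sketch below writes a LoRA SFT configuration and launches LLaMA-Factory's training CLI. The dataset name, output path, and hyperparameters are placeholders, and the YAML keys should be checked against the LLaMA-Factory version you deploy on SageMaker HyperPod.

```python
import subprocess
import yaml

# Hypothetical LoRA SFT configuration; key names follow LLaMA-Factory's YAML schema,
# but verify them against the LLaMA-Factory version installed on your cluster.
train_config = {
    "model_name_or_path": "Qwen/Qwen2-VL-7B-Instruct",
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "lora",
    "lora_target": "all",
    "dataset": "banking_table_recognition",  # placeholder name for your prepared dataset
    "template": "qwen2_vl",
    "cutoff_len": 4096,
    "output_dir": "/fsx/checkpoints/qwen2-vl-7b-table-sft",  # placeholder shared-storage path
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 3,
    "bf16": True,
}

with open("qwen2vl_table_sft.yaml", "w") as f:
    yaml.safe_dump(train_config, f)

# LLaMA-Factory is driven by its CLI; on HyperPod this would typically run inside the training job.
subprocess.run(["llamafactory-cli", "train", "qwen2vl_table_sft.yaml"], check=True)
```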