Artificial Intelligence

Author: Wrick Talukdar

Whiteboard to cloud in minutes using Amazon Q, Amazon Bedrock Data Automation, and Model Context Protocol

We’re excited to share the Amazon Bedrock Data Automation Model Context Protocol (MCP) server, which enables seamless integration between Amazon Q and your enterprise data. In this post, you will learn how to use the Amazon Bedrock Data Automation MCP server to securely integrate with AWS services, use Amazon Bedrock Data Automation operations as callable MCP tools, and build a conversational development experience with Amazon Q.
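
To make this concrete, here is a minimal sketch of connecting to the server over stdio with the MCP Python SDK, listing the Bedrock Data Automation operations it exposes, and calling one of them. The launch command, tool name, and arguments are assumptions for illustration; check the server’s documentation for the actual values.

```python
# Minimal sketch: connect to the Amazon Bedrock Data Automation MCP server
# and call one of its tools. The command, tool name, and arguments are assumptions.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command; confirm the package name in the awslabs MCP documentation.
server_params = StdioServerParameters(
    command="uvx",
    args=["awslabs.aws-bedrock-data-automation-mcp-server@latest"],
    env={"AWS_REGION": "us-east-1"},
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the Bedrock Data Automation operations exposed as MCP tools.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool call: analyze an asset stored in Amazon S3.
            result = await session.call_tool(
                "analyze_asset",  # assumed tool name
                arguments={"s3_uri": "s3://my-bucket/whiteboard-photo.png"},
            )
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```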

Autonomous mortgage processing using Amazon Bedrock Data Automation and Amazon Bedrock Agents

In this post, we introduce agentic automatic mortgage approval, a next-generation sample solution that uses autonomous AI agents powered by Amazon Bedrock Agents and Amazon Bedrock Data Automation. These agents orchestrate the entire mortgage approval process—intelligently verifying documents, assessing risk, and making data-driven decisions with minimal human intervention.
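
As a rough sketch of how such agents are called programmatically, the snippet below invokes an Amazon Bedrock agent through the bedrock-agent-runtime client and streams back its reply. The agent ID, alias ID, and prompt are placeholders; the sample solution orchestrates several agents and Amazon Bedrock Data Automation behind calls like this.

```python
# Sketch: invoke an Amazon Bedrock agent and stream its reply.
# The agent ID, alias ID, and prompt below are placeholders.
import uuid

import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),    # one session per mortgage application
    inputText="Assess the mortgage application stored in s3://my-bucket/applications/app-123/",
)

# The completion is returned as an event stream of text chunks.
answer = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")

print(answer)
```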

Unleashing the multimodal power of Amazon Bedrock Data Automation to transform unstructured data into actionable insights

Today, we’re excited to announce the general availability of Amazon Bedrock Data Automation, a powerful, fully managed capability within Amazon Bedrock that seamlessly transforms unstructured multimodal data into structured, application-ready insights with high accuracy, cost efficiency, and scalability.
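
To show what invoking the capability can look like, the following sketch submits an S3 object to an Amazon Bedrock Data Automation project asynchronously and polls for completion. The ARNs and bucket names are placeholders, and the exact parameter shapes should be verified against the current API reference, as they may differ by SDK version.

```python
# Sketch: run an asynchronous Bedrock Data Automation job on an S3 object and
# poll until it finishes. ARNs and bucket names are placeholders, and the
# parameter shapes should be verified against the current API reference.
import time

import boto3

bda_runtime = boto3.client("bedrock-data-automation-runtime", region_name="us-east-1")

invocation = bda_runtime.invoke_data_automation_async(
    inputConfiguration={"s3Uri": "s3://my-input-bucket/contracts/contract-001.pdf"},
    outputConfiguration={"s3Uri": "s3://my-output-bucket/bda-results/"},
    dataAutomationConfiguration={
        "dataAutomationProjectArn": "arn:aws:bedrock:us-east-1:111122223333:data-automation-project/my-project",
        "stage": "LIVE",
    },
    dataAutomationProfileArn="arn:aws:bedrock:us-east-1:111122223333:data-automation-profile/us.data-automation-v1",
)

invocation_arn = invocation["invocationArn"]
while True:
    status = bda_runtime.get_data_automation_status(invocationArn=invocation_arn)
    if status["status"] in ("Success", "ServiceError", "ClientError"):
        break
    time.sleep(10)

# On success, the structured insights are written as JSON under the output S3 prefix.
print(status["status"])
```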

Elevate healthcare interaction and documentation with Amazon Bedrock and Amazon Transcribe using Live Meeting Assistant

Today, physicians spend about 49% of their workday documenting clinical visits, which impacts physician productivity and patient care. Did you know that for every eight hours that office-based physicians have scheduled with patients, they spend more than five hours in the EHR? As a result, healthcare practitioners are increasingly turning to conversational intelligence solutions, […]

Build trust and safety for generative AI applications with Amazon Comprehend and LangChain

We are witnessing a rapid increase in the adoption of large language models (LLMs) that power generative AI applications across industries. LLMs are capable of a variety of tasks, such as generating creative content, answering inquiries via chatbots, generating code, and more. Organizations looking to use LLMs to power their applications are increasingly concerned about data privacy and about maintaining trust and safety in their generative AI applications. This includes handling customers’ personally identifiable information (PII) properly. It also includes preventing abusive and unsafe content from being propagated to LLMs and checking that data generated by LLMs follows the same principles. In this post, we discuss new features powered by Amazon Comprehend that enable seamless integration to ensure data privacy, content safety, and prompt safety in new and existing generative AI applications.
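
To ground this, the sketch below shows the kind of Amazon Comprehend checks such an integration builds on, flagging PII and toxic content in a prompt before it reaches an LLM. The post itself wires these checks into LangChain as a moderation chain; the thresholds and block-on-detection policy here are illustrative assumptions.

```python
# Sketch: screen a user prompt with Amazon Comprehend before sending it to an LLM.
# The 0.5 thresholds and the block-on-detection policy are illustrative assumptions.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

def is_prompt_safe(text: str) -> bool:
    # Flag personally identifiable information (PII) in the prompt.
    pii = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    if any(entity["Score"] > 0.5 for entity in pii["Entities"]):
        return False

    # Flag toxic content such as hate speech, insults, or threats.
    toxicity = comprehend.detect_toxic_content(
        TextSegments=[{"Text": text}], LanguageCode="en"
    )
    if any(result["Toxicity"] > 0.5 for result in toxicity["ResultList"]):
        return False

    return True

prompt = "Summarize the support ticket from jane.doe@example.com about her billing issue."
if is_prompt_safe(prompt):
    print("Prompt passed the trust and safety checks; forward it to the LLM.")
else:
    print("Prompt blocked: it contains PII or unsafe content.")
```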

Announcing support for extracting data from identity documents using Amazon Textract

Creating efficiencies in your business is at the top of your list. You want your employees to be more productive, to focus on high-impact tasks, and to find better processes that improve outcomes for your customers. There are various ways to solve this problem, and more companies are turning to […]
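
As a quick illustration, the sketch below calls the Amazon Textract AnalyzeID API on a driver’s license image in Amazon S3 and prints the normalized fields it returns. The bucket and object names are placeholders.

```python
# Sketch: extract fields from an identity document with Amazon Textract AnalyzeID.
# Bucket and object names are placeholders.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_id(
    DocumentPages=[
        {"S3Object": {"Bucket": "my-bucket", "Name": "ids/drivers-license.png"}}
    ]
)

# AnalyzeID returns normalized fields (for example FIRST_NAME, DATE_OF_BIRTH,
# DOCUMENT_NUMBER) with a confidence score for each detected value.
for document in response["IdentityDocuments"]:
    for field in document["IdentityDocumentFields"]:
        name = field["Type"]["Text"]
        value = field["ValueDetection"]["Text"]
        confidence = field["ValueDetection"]["Confidence"]
        print(f"{name}: {value} ({confidence:.1f}%)")
```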