2025

Streamlining LLM adaptation using Amazon Bedrock with JetBrains

Learn how software provider JetBrains builds developer tools using Amazon Bedrock.

Using Amazon Bedrock, we’re able to experiment across different LLMs, which before was nearly impossible for us because it took so much time.

Vladislav Tankov

Director of AI, JetBrains

Overview

On a mission to support developers, JetBrains is always looking for ways to streamline how developers work in the age of artificial intelligence (AI). The company wanted to adapt new large language models (LLMs) in its AI assistant plugin for its integrated development environments.

JetBrains has been using Amazon Web Services (AWS) since 2013, and it had already used AWS services for its JetBrains AI platform, a set of products that includes tools for code completion, context-aware chats, and agent workflows. By adapting LLMs through Amazon Bedrock, a comprehensive, secure, and flexible service for building generative AI applications and agents, JetBrains could shorten adaptation time and experiment with new models simply and quickly, sometimes within a single day. Additionally, using Amazon Bedrock helps JetBrains maintain compliance across regions, facilitating product deployment in new markets.

About JetBrains

Founded in 2000, JetBrains specializes in the creation of intelligent development tools that help developers be more productive. Its products are used by more than 300,000 organizations worldwide.

Opportunity | Using Amazon Bedrock to streamline LLM adaptation for JetBrains

JetBrains works with more than 11.4 million software developers worldwide. Used by over 300,000 businesses, its products include IntelliJ IDEA, PyCharm, WebStorm, and more. To stay at the forefront of rapidly changing technology, the company is constantly adding new features and capabilities to its products.

“Since the start, we’ve created tools for software developers that help them do their jobs more productively and that make their jobs more enjoyable,” says Vladislav Tankov, director of AI at JetBrains. “It became obvious to us that we needed to support additional LLM providers.”

To support new LLMs, JetBrains would need to work with individual LLM providers to understand the way they operate, including the way they handle data and the throughput support that they provide. However, JetBrains had another option: Amazon Bedrock, which could provide both the capability for JetBrains to connect with LLM providers and support for hosting open-source models.

For JetBrains, Amazon Bedrock greatly simplified the process of adding new LLMs. “It’s simpler using Amazon Bedrock because it provides you with a unified interface so that you don’t have to initiate communications with each new LLM provider,” says Tankov. “You’re just talking to one team—AWS. And you know if the folks from AWS tell you something, you can trust that they will be doing it.”
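The unified interface Tankov describes is the Amazon Bedrock Converse API: one request shape works across model providers, so switching LLMs is a one-line change. The sketch below illustrates the idea in Python with boto3; the model IDs, prompt, and inference parameters are illustrative assumptions, not JetBrains's actual configuration.

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build the keyword arguments for a Bedrock runtime converse() call.

    The same structure works for any model available in Amazon Bedrock,
    which is what removes the need for per-provider integrations.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# Swapping providers is just a different model ID (IDs below are examples).
MODEL_IDS = [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "deepseek.r1-v1:0",
]

if __name__ == "__main__":
    import boto3  # actual calls require AWS credentials and the boto3 package

    client = boto3.client("bedrock-runtime")
    for model_id in MODEL_IDS:
        # Identical call for every provider; only modelId changes.
        response = client.converse(**build_converse_request(model_id, "Hello"))
        print(response["output"]["message"]["content"][0]["text"])
```

Because the request builder is pure Python, a team can validate and test its request shapes without touching the network, then pass the same dictionary to `converse()` in production.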

Solution | Evaluating dozens of LLMs to help developers succeed

Because JetBrains has trusted AWS with its infrastructure for more than a decade, its team felt prepared to integrate Amazon Bedrock into its solution. “It was simple and straightforward, and that was one of the things that we really liked about Amazon Bedrock,” says Tankov. Because of that ease of implementation, JetBrains only needed to allocate a single person to the integration. Within 1 month, all testing was finished and the integration complete.

Using Amazon Bedrock, JetBrains can adapt new LLMs with ease. That makes it possible for the company to test new LLMs to determine the best-fitting model for each application. For example, JetBrains uses Anthropic’s Claude in Amazon Bedrock for code-related tasks. “Using Amazon Bedrock, we’re able to experiment across different LLMs, which before was nearly impossible for us because it took so much time,” says Tankov. “Now, we have the flexibility and ability to evaluate dozens of LLMs to see which performs better on different benchmarks.”

Similarly, JetBrains offers developers the ability to choose their preferred LLMs. “We are building our AI vision around the choice that we offer to our developers,” says Tankov. “Developers need choice because the industry is changing really quickly. Different LLMs target different sectors, even within software development.” Importantly, developers can even choose not to use AI. “We do see a number of software developers who want a more standard experience, and we provide that as well,” says Tankov.

Using new LLMs through Amazon Bedrock has greatly accelerated JetBrains’s model adaptation timeline. For JetBrains to adapt DeepSeek on its own, for example, the company estimated it would have taken several weeks. Using DeepSeek in Amazon Bedrock, JetBrains was able to adapt the LLM in just 1 day.

The global presence of Amazon Bedrock means that users can consistently access AI services while complying with data residency requirements. “Using Amazon Bedrock really helps because we don’t have to recertify the whole solution in different regions,” says Tankov. “We know that AWS takes regulatory compliance seriously.” Additionally, using Amazon Bedrock for open-source LLMs makes it possible to integrate guardrails so that JetBrains can be confident that the LLM is delivered in a safe manner.
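Guardrails in Amazon Bedrock attach content-safety policies to a request rather than to the model itself, which is how an open-source LLM can be "delivered in a safe manner" without modifying it. A minimal sketch, assuming a hypothetical guardrail ID (real identifiers come from creating a guardrail in the Bedrock console or API):

```python
def with_guardrail(request: dict, guardrail_id: str, version: str = "DRAFT") -> dict:
    """Return a copy of a converse() request with a guardrailConfig attached.

    The guardrail filters both the incoming prompt and the model's output
    according to the policies defined on the guardrail resource.
    """
    guarded = dict(request)
    guarded["guardrailConfig"] = {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
    }
    return guarded

# Example: decorate any existing request before sending it.
base_request = {
    "modelId": "deepseek.r1-v1:0",  # illustrative open-source model ID
    "messages": [{"role": "user", "content": [{"text": "Hello"}]}],
}
guarded_request = with_guardrail(base_request, "gr-example123")  # ID is a placeholder
```

Because the guardrail is layered onto the request, the same policy can be reused across every model the team evaluates.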

And because Amazon Bedrock operates across 17 regions worldwide, JetBrains is able to use cross-region inferencing to distribute workloads, helping to minimize latency even in times of peak demand. “If we ran inference on our own, we would need to solve scaling,” says Tankov. “Basically, Amazon Bedrock solves this problem for us.”
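Cross-region inference in Amazon Bedrock works through system-defined inference profiles: prefixing a model ID with a geography code (`us.`, `eu.`, `apac.`) yields a profile ID that Bedrock routes across the regions in that geography. A small sketch of the naming scheme; the base model ID is an illustrative assumption:

```python
# Geography prefixes used by Bedrock's system-defined inference profiles.
GEO_PREFIXES = {"us": "us.", "eu": "eu.", "apac": "apac."}

def inference_profile_id(geo: str, model_id: str) -> str:
    """Return the cross-region inference profile ID for a geography.

    Requests sent to a geo-prefixed profile ID are load-balanced by Bedrock
    across the regions in that geography, absorbing peak demand without any
    client-side scaling logic.
    """
    if geo not in GEO_PREFIXES:
        raise ValueError(f"unknown geography: {geo!r}")
    return GEO_PREFIXES[geo] + model_id

# e.g. "eu.anthropic.claude-3-5-sonnet-20240620-v1:0" can be passed as the
# modelId of a converse() call in place of the plain model ID.
```

This is the sense in which "Amazon Bedrock solves this problem": the caller picks a geography once, and routing, failover, and capacity are handled by the service.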

Outcome | Staying at the forefront of rapidly changing technology

Using Amazon Bedrock, JetBrains not only has flexibility to adapt existing LLMs but can also deliver its own open-source LLMs to Amazon Bedrock. “The fact that we can use Amazon Bedrock to deliver our products so that other folks will be able to use them is extremely important for us,” says Tankov. JetBrains uses NVIDIA GPUs on AWS, including NVIDIA B200 and NVIDIA H200 GPUs, to train its LLMs. The company trains Mellum, its model with over 4 billion parameters, and appreciates the scalability that AWS provides. Moreover, JetBrains is the first AWS customer to use the new Amazon EC2 P6 instances featuring NVIDIA Blackwell GPUs, reflecting its expertise in the use of advanced GPU infrastructure. JetBrains has also launched its Mellum solution on the Amazon Bedrock Marketplace, making its in-house model available to developers worldwide. “AWS is providing top-notch infrastructure, but it can also adapt to our business model,” says Tankov. “That’s a really great thing.”

Going forward, JetBrains will use AWS to expand its use of agentic workflows. The company is working to integrate Amazon Q Developer, the most capable generative AI–powered assistant for software development, into its flagship products. It also plans to create standard frameworks for developers to create their own agents.

Adopting Amazon Bedrock has met a crucial need for JetBrains. “The fact that Amazon Bedrock covers a lot of the legislative, technological, and organizational concerns, and delivers a trusted and stable solution, is very valuable,” says Tankov. “That’s the biggest part.”

Get Started

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.