Overview
CloudBeaver is a universal web interface for data management developed by the DBeaver team, adapted especially for AWS cloud services. It is a lightweight web application that you can share with all AWS users in your company. CloudBeaver allows you to:
- view and edit data and metadata in your databases
- export data from tables
- run SQL queries against SQL and NoSQL databases
- view ER diagrams for database objects and export them

Out of the box, CloudBeaver supports AWS RDS (PostgreSQL, MySQL, Oracle, SQL Server), AWS Redshift, Aurora, Athena, DynamoDB, DocumentDB, and Keyspaces. You can also create connections to your own databases; dozens of drivers are already included.
Highlights
- CloudBeaver works easily with your databases in AWS. In a few clicks you can set up a CloudBeaver server with connections to all your AWS and third-party databases. These connections are available to all users in your company and respect AWS permissions.
- CloudBeaver shows data from SQL and NoSQL databases in table or JSON view. For experienced users, CloudBeaver offers an advanced SQL editor with syntax highlighting and auto-completion.
- You can explore the structure of your database with ER diagrams, which are available for databases, schemas, and tables.
Details
Pricing
Free trial
| Dimension | Cost/hour |
|---|---|
| t3.large (Recommended) | $1.50 |
| t2.micro | $0.20 |
| m5.4xlarge | $8.60 |
| m4.large | $1.50 |
| m5.large | $1.50 |
| t3.medium | $0.60 |
| t2.medium | $0.60 |
| m5.xlarge | $2.80 |
| t2.large | $1.50 |
| m5.2xlarge | $4.60 |
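As a rough aid for comparing the hourly dimensions above, here is a minimal sketch of estimating a monthly software cost, assuming an instance runs continuously (about 730 hours per month); EC2 infrastructure charges are billed separately and are not included.

```python
# Rough monthly software cost estimate from the hourly rates in the table.
# 730 is the average number of hours in a month (24 * 365 / 12).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Software cost for running one instance continuously for a month."""
    return round(hourly_rate * hours, 2)

print(monthly_cost(1.50))  # t3.large (recommended)
print(monthly_cost(0.20))  # t2.micro
```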
Vendor refund policy
Refund within 30 days
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Changes since 25.3:
Security:
- Enforced complete logout and screen data clearance upon session expiration.
- Fixed a high-severity vulnerability (CVE-2026-25639) in the axios library by updating it to version 1.13.5.
- Fixed inefficient regular expressions in TypeScript files to prevent potential Denial of Service (DoS) attacks and performance degradation.
- Improved security for deployments by restricting network access to internal services.
- Removed default insecure passwords from example deployment configurations.
Administration:
- Added the Audit log panel to the Administration part. When enabled, the panel shows various user actions, including authentication, connection management, configuration changes, and file operations.
- Added support for mapping users to CloudBeaver teams based on LDAP memberOf group membership.
- The CLOUDBEAVER_PUBLIC_URL variable was removed from all .env files and is no longer available for use.
- Added the ability to store the application workspace in an S3-compatible object storage. To configure, add new variables to the .env file.
- Added the Tech Support button in the bottom left corner of the Administration part for faster communication with DBeaver tech support.
- Changed the User list settings in the Administration part to show both active and inactive users by default.
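To illustrate the LDAP-based team mapping mentioned above, here is a minimal sketch of resolving teams from a user's memberOf attribute. The group DNs and team names are hypothetical, and this is not CloudBeaver's actual configuration format, only the general idea.

```python
# Hypothetical mapping from LDAP group DNs (memberOf values) to
# CloudBeaver team names. All names below are illustrative.
GROUP_TO_TEAM = {
    "cn=db-admins,ou=groups,dc=example,dc=com": "administrators",
    "cn=analysts,ou=groups,dc=example,dc=com": "analysts",
}

def teams_for_user(member_of: list[str]) -> list[str]:
    """Resolve team names from a user's memberOf attribute values."""
    return [GROUP_TO_TEAM[g] for g in member_of if g in GROUP_TO_TEAM]

print(teams_for_user([
    "cn=analysts,ou=groups,dc=example,dc=com",
    "cn=other,ou=groups,dc=example,dc=com",
]))  # ['analysts']
```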
AI Assistant:
- Added the ability to configure the MCP Server. The service can now be enabled in the Server Configuration section and configured via the AI section of the Connection page by administrators. Ready-to-use client configuration snippets for VS Code, Claude Code, Zed, and Cursor are available to simplify connection setup.
- Added "Enable metrics" option to AI Chat settings. When enabled, the chat displays token counts for user messages and AI responses across all conversation history. The setting remains disabled by default. Provided transparency into AI token consumption for usage awareness and cost management.
- Added the ability to change the default language for responses from an AI engine. This setting can be configured in the AI Settings section in the Administration part.
- Embedding model and dimension settings are now configurable in the AI Settings section of the Administration part for the Azure OpenAI, OpenAI, and GitHub Copilot agents.
- Added SQL code highlighting for auto-generated messages in the AI Chat. Different conversation types are now marked with specific icons in the AI Chat.
SQL Editor:
- Added support for parameters and variables in queries. This feature allows queries to be reused by changing parameters at execution time. Enabled by default and configurable in personal preferences.
- Added SQL preview to the Bind parameters/variables dialog to review queries with changed values on the fly.
- Enabled Tab key for autocompletion in the SQL Editor alongside the Enter key.
- Added a new setting in the SQL Editor to highlight spaces, tabs, and other whitespace characters to help users read, debug, and maintain their scripts. It is turned off by default and can be configured in personal preferences.
- Dangerous query confirmation is now shown for all DROP statements, not just for tables.
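The parameters-and-variables feature above lets one query be reused with different values at execution time. As a general illustration of the technique (using sqlite3 named parameters; CloudBeaver's own binding syntax may differ), consider:

```python
import sqlite3

# Demonstrates reusing a single parameterized query with different
# bound values, the same idea as the SQL Editor feature above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("pen", 1.5), ("book", 12.0), ("desk", 80.0)])

query = "SELECT name FROM products WHERE price >= :min_price ORDER BY price"
for params in ({"min_price": 10}, {"min_price": 50}):
    rows = [r[0] for r in conn.execute(query, params)]
    print(params["min_price"], rows)
```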
Data Editor:
- Added ability to automatically generate INSERT, SELECT, DELETE, and UPDATE statements for the selected values.
- Added undo and redo functionality for cell edits, row operations, and other data modifications. Retains the last 50 actions across the Data Editor, result sets, and related panels.
- Added "Use local formatting" setting. Users can choose how to display numbers and dates: using the OS locale, a custom locale, or keeping values unformatted. This formatting applies only to displayed values. Data in the database remains unchanged.
- Added column pinning to keep key columns (e.g., IDs, names) visible while scrolling horizontally through wide tables.
- Added status indicator icon in the top-left corner with tooltips explaining table editability. Indicates presence of primary keys, read-only connection settings, or read-only columns.
- Added shortcut Ctrl/Cmd + . to cancel operations in Data Editor.
- Fixed an application freeze when canceling fetch-size requests for large tables.
- Fixed an issue where large JSON columns displayed incorrect values after formatting the content in the Data Editor.
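The bounded undo/redo history described above (retaining the last 50 actions) can be sketched with a fixed-size deque. This is a minimal illustration of the data structure, not CloudBeaver's implementation; the action representation is hypothetical.

```python
from collections import deque

class UndoHistory:
    """Undo/redo history that keeps only the most recent `limit` actions."""

    def __init__(self, limit: int = 50):
        self.undo_stack: deque = deque(maxlen=limit)  # oldest entries drop off
        self.redo_stack: list = []

    def record(self, action):
        self.undo_stack.append(action)
        self.redo_stack.clear()  # a new edit invalidates the redo chain

    def undo(self):
        if not self.undo_stack:
            return None
        action = self.undo_stack.pop()
        self.redo_stack.append(action)
        return action

    def redo(self):
        if not self.redo_stack:
            return None
        action = self.redo_stack.pop()
        self.undo_stack.append(action)
        return action

h = UndoHistory(limit=50)
for i in range(60):            # record 60 edits; only the last 50 are kept
    h.record(f"edit-{i}")
print(len(h.undo_stack))       # 50
print(h.undo())                # edit-59
```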
Navigator tree:
- Added the ability to duplicate connection configuration in the project navigation tree. The "Clone connection" feature is available in the context menu.
- Added the ability for users to configure the Simple or Advanced view in the Navigation tree for all connections or for each connection separately.
- Added the ability to show table objects, such as columns or keys, in the Navigation tree. The setting is disabled by default and can be turned on in the Navigator settings panel.
- Added the ability to rename connections via context menu in the Navigation Tree.
- Added Connection Info tab to display basic information about the current connection for all users.
General:
- Added support for long polling as a fallback when WebSockets are unavailable or blocked. Ensures reliable communication for SSO login, metadata updates, and SQL execution.
- Extended browser support to versions up to three years old.
- Added the ability to filter by IP address for the Query Manager/Query History.
- Redesigned the connection configuration page. Reorganized form fields and sections to provide more input space and reduce visual clutter.
- Expanded pointer target areas for icons in the Navigator, editors, and tabs according to the accessibility standards.
- Fixed a keyboard navigation issue for panels to keep the focus inside.
- Renamed "Database Native" authentication type to "Username/password" in the connection dialog.
Databases and drivers:
- Added firstRow and rowCount properties to CSV, XLSX, XML, JSON, and Parquet drivers to allow reading specific data ranges from source files. These settings can be configured via connection properties or the table DDL WITH clause.
- Athena: Driver was updated to version 3.27.0.
- Azure Synapse Dedicated SQL Pool and Microsoft Fabric Warehouse: Fixed the display of stored procedure definitions.
- ClickHouse:
  - Updated driver to version 0.9.5
  - Added spatial data support
  - Fixed an issue with displaying arrays of UUID, IPv4/IPv6, and Map types
- Databricks:
  - Updated driver version to 3.1.1
  - Added spatial data visualization
  - Added machine-to-machine (M2M) authentication support for the Databricks connector, enabling secure, automated access to Databricks workspaces without interactive login
- Db2 LUW: Added Microsoft Entra ID authentication.
- Denodo: Updated driver to version 9.0.
- DuckDB:
  - Updated driver to version 1.4.4.0
  - Added support for the dollar-quoted string syntax in the SQL Editor
- MongoDB: Milliseconds are now displayed correctly in the Data Editor.
- MySQL/MariaDB: Fixed issue with ER diagram creation for tables with foreign keys.
- Yellowbrick: Fixed DDL for stored procedures.
- Oracle: Added a new "Set Username to OS_USER" option in the Misc section of Oracle connection settings. Automatically uses the current database username as the operating system user identifier in session metadata when enabled.
- PostgreSQL: Added DDL display support for PostgreSQL policies.
- Redshift: Materialized views are now displayed as system tables with a special icon.
- Salesforce: OAuth authentication was fixed.
- Salesforce Data Cloud:
  - Salesforce Data Cloud was renamed to Salesforce Data 360
  - Fixed a default schema detection issue
  - Resolved issues with autocomplete in the SQL Editor
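The firstRow and rowCount properties for the flat-file drivers above select a range of rows from the source file. As a generic illustration of that semantics (not the driver's actual implementation), reading a CSV range in Python might look like:

```python
import csv
import io
from itertools import islice

def read_range(text: str, first_row: int, row_count: int):
    """Return row_count data rows starting at first_row (1-based, after the header)."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)  # consume the header row
    rows = list(islice(reader, first_row - 1, first_row - 1 + row_count))
    return header, rows

data = "id,name\n1,a\n2,b\n3,c\n4,d\n"
header, rows = read_range(data, first_row=2, row_count=2)
print(rows)  # [['2', 'b'], ['3', 'c']]
```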
Additional details
Usage instructions
- Run the selected EC2 instance with CloudBeaver.
- Open the link to your new EC2 instance in a browser.
- Follow the simple steps to configure your CloudBeaver instance.
- Share the link with other team members and start working.
Resources
Vendor resources
Support
Vendor support
Online support: support@dbeaver.com
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Standard contract
Customer reviews
Browser-based SQL access has streamlined team collaboration but still needs faster queries and better ML integration
What is our primary use case?
My main use case for CloudBeaver AWS is web-based database access that my entire distributed team can use for training and modeling machine learning use cases. Any centralized database management we need, such as connections, credentials, and configurations, I can handle inside CloudBeaver whenever we use the AWS cloud for model instances or model training on SageMaker. I use S3 and EC2 instances for uploading data, and with CloudBeaver I can also run heavier queries against MySQL, PostgreSQL, or MongoDB; all of that multi-database support is available in CloudBeaver AWS. Governance is easy, we can use all kinds of local tools, and it is easily deployable on EC2 instances, or on EKS if you want to run it at Kubernetes-pod scale. There are also plenty of RBAC policies available, so who can access which databases can be configured with Role-Based Access Control, and it is very friendly for collaboration.
Whenever I work on project delivery in my AI architecture setup, or want to look at data in AWS RDS, I jump into CloudBeaver on EC2 and work from the browser. My teams, groups, and any collaborative analytics can be identified there, and I can put a Python notebook on top of it for model training. Basically, I can connect my database to the CloudBeaver tool, perform all kinds of feature engineering via SQL, export the data for machine learning model training, and get insights as well. That is the main use case I am setting up the CloudBeaver tool for in AWS for the database extraction process.
My team collaborates within CloudBeaver AWS using its collaboration options. In a typical organization scenario with multiple tables, or SQL use cases that need joins, I can connect the database to CloudBeaver, do some feature engineering, and then model training follows. Whenever I want to collaborate with my team, I define role-based access for all the features, grant permissions on the database, and track how it is being used, how heavy workflows are integrated, and what kinds of training setups are done. Access can be controlled; role-based access control is very smooth in CloudBeaver on AWS and very suitable for machine learning tasks or any data-science-related activities.
What is most valuable?
The best features CloudBeaver AWS offers center on SQL access on AWS. From any RDS instance, I can access or collaborate on databases, then move on to modeling and store the models. SQL exploration is very smooth, and IAM role access works perfectly: static credentials can now be replaced with IAM access, and role-based access controls are available and secure. Because the database is browser-based, I can control read-write permissions, and even heavy queries can be managed. Workflows can be added and managed in one place. Everything can be connected, whether an external database or a data warehouse: inside CloudBeaver I can create connections to RDS, external databases, or warehouses, all easily configured. I can run SQL in the browser, save queries, and run the joins or aggregations I need. All of this is very smooth in CloudBeaver.
The feature that has made the biggest difference in my day-to-day work is browser-based database control, which makes practical scenarios much easier; role-based access can be assigned easily, which is very useful for project delivery. One project requirement illustrates this. I am working on a healthcare project where doctors need to maintain patients' kidney reports. Doctors log into CloudBeaver, the browser-based database, and query the patient data; they can find high-risk patients directly by filtering, export the reports, and share the insights. With just a few clicks I can get the whole report and insight and share it, all on the AWS cloud, which is an achievement in itself. That is where it made the biggest difference.
CloudBeaver AWS has positively impacted my organization because all kinds of browser-based access can be set up, with role-based use cases on top. That gives us clarity on how use cases can be covered together and what the criteria are at an organizational level, and I can grant access accordingly. In that regard, our organization has managed this perfectly.
Since using CloudBeaver AWS, my organization has experienced many positive outcomes. Collaboration within the team is perfect: I can see what the team is working on, how roles are assigned, and how their databases are used for model work, and we can trace it all. That also lets me see how database costs can be managed and which cloud services to use for actionable insights. On top of that, for whatever machine learning model I am building, efficiency can be gauged directly from the SQL queries and the insights gained, which helps analyze the results. My efficiency and the accuracy of the whole approach have increased, and I get to work on high-level scenarios on cloud instances.
What needs improvement?
CloudBeaver AWS can be improved because rendering very complex or large queries slows down responses in the browser. Compared to desktop DBeaver, it is noticeably slower in the browser on AWS: heavy data engineering can be done, but responses are very slow, and that needs to be addressed. There is also no machine learning connectivity, nothing like MLflow, where Airflow or PySpark approaches could be integrated; Python pipelines can be created, but the end-to-end machine learning pipeline gets stuck whenever we work with DBeaver. That is another issue I would like to see improved. I also need to maintain the infrastructure myself: manage it, identify the risks, and do the proper setup of VPCs and IAM. There is no natural-language-query capability; user queries must be written manually rather than automated, and that should be automated now. Debugging is very painful: errors occur without much clarity, the logs are not intuitive, and debugging support is limited.
The user interface and documentation look good, but I would still suggest improvements.
For how long have I used the solution?
I have been using CloudBeaver AWS for more than two years.
What do I think about the stability of the solution?
CloudBeaver AWS is stable.
How are customer service and support?
The customer support is always top-notch. AWS itself responds, and the customer support team provides guidance. AI is integrated for ticket generation and evaluation, so I receive quicker responses based on pre-generated content matching my query. That support is excellent.
Which solution did I use previously and why did I switch?
I was using cloud solutions from the start of my work, but I also worked on local database instances such as MySQL, PostgreSQL, or MongoDB. In comparison, the cloud-based scenario worked directly with CloudBeaver, and it worked fine. It is user-friendly, and the UI is very attractive; you would not get bored. Everything is managed well, analysis is available, AI integrations are now supported, and SAP integrations are being applied as well. There are many things you can try out here.
How was the initial setup?
My experience with pricing, setup cost, and licensing: when we looked at an organization-level subscription for CloudBeaver, first of all, it is free and open source, so no licensing cost comes into the picture; you just pay for the AWS infrastructure. The same goes for pricing: maintaining CloudBeaver is very simple, and you only pay the infrastructure cost of whatever your instances use, such as the EC2 instance, EBS or RDS storage, or network charges. The enterprise edition starts at a reasonable level but goes up to fairly high pricing tiers; that pricing is fair given the quality and quantity of the features in CloudBeaver, and it gives you a 14-day free trial at the enterprise level. Deploying CloudBeaver is also very easy: payment and costs are integrated directly through AWS, pricing can be set up on AWS infrastructure, and teams can collaborate on the production setup. Charges are higher at the enterprise level, but bearable; still, if the price could be brought closer to market level, it would be more approachable compared with other database-access tools.
CloudBeaver AWS is deployed in my organization as a public cloud.
I purchased CloudBeaver AWS through the AWS marketplace.
What was our ROI?
I have seen a return on investment because time is saved on many things. As I mentioned, I have worked on multiple projects with CloudBeaver, and in the doctor's example I gave, it can generate reports for multiple patients at once. Queries can be slower when they get complex, but it still saves a lot of time. If your data volume is small, you can go for a lower or free tier; if it is much larger, you go for higher tiers. Your infrastructure cost will rise, but in return your results will be accurate, well managed, secure, encrypted, and efficient. Browser-friendly responses are available, so you can analyze your data and work with queries. All of this enhances software development and data science work.
Which other solutions did I evaluate?
I have not evaluated other options since I have worked out mostly on AWS. That would be my go-to option.
What other advice do I have?
I definitely recommend that others looking into CloudBeaver AWS try it out. It is very smooth, but if you are a data scientist, your end-to-end approach will not work perfectly. For database work on cloud instances, you can directly integrate CloudBeaver to work with your databases and credentials. I would rate this product a 7 out of 10.
DBeaver: Clean UI, Works with Any Database, and Packed with Handy Extras
On the plus side, setup in DBeaver has been pretty easy for me, with no issues there. The only thing I haven’t really tried yet is the AI features, so I can’t say how useful they are for real day-to-day work.
Great for Managing Multiple Databases with Handy Tabbed Queries
Streamlining Database Workflows with DBeaver
I also find the connection management features valuable, since they make it easy to set up and manage database connections and switch between different environments without any hassle. Overall, DBeaver improves my workflow by keeping database testing, validation, and data analysis more efficient, organized, and straightforward.