
    Rhino Self-Managed

    Sold by: Rhino.ai 
    Deployed on AWS
    Rhino.ai is an AI-powered platform that delivers faster modernization, lower costs, reduced tech debt, and higher-quality enterprise applications through automated discovery, documentation, and requirements generation with full traceability and flexibility. It automatically extracts code, documents, and workflows; organizes the most important business application context and logic into a patented, platform-agnostic representation where the application can be re-engineered; and outputs agent-ready, modern application requirements suitable for microservices-based open-source implementations as well as low-code platforms such as ServiceNow and Appian.

    Overview

    Rhino.ai is an AI-powered modernization platform that turns legacy black boxes into fully understood, agent-ready applications. It accelerates transformation, reduces costs, cuts technical debt and improves application quality by automating discovery, documentation and requirements generation with full traceability and flexibility.

    Comprehensive Discovery & Documentation

    Rhino.ai rapidly analyzes and documents complex legacy enterprise applications--including SaaS, low-code, and traditional codebases--to build complete visibility across your technology landscape. Its agentic AI automatically extracts business logic from existing code, documentation, databases, and workflows, capturing hidden dependencies and inefficiencies. By analyzing legacy codebases, SaaS platforms, natural-language documentation, and process manuals, Rhino.ai gives you multi-perspective clarity: the platform extracts requirements, process flows, and knowledge from technical specs, user manuals, and process documents, and tracks code structures, database schemas, and APIs across your portfolio.

    Intelligent Documentation & Traceability

    Discovery results are organized into comprehensive deliverables that support multiple personas. Rhino.ai generates universal application documentation, user stories, test cases, and process flowcharts, along with functional documentation, technical architecture mappings, business-rule extractions, modernization roadmaps, and implementation-ready specifications. These artifacts are available in structured, machine-readable formats so both human teams and AI agents can consume them. Fine-grained extraction control and coverage statistics provide audit-level evidence that nothing was missed, and source-to-requirement linking delivers unparalleled trust and traceability.

    Platform Architecture & Universal Application Notation

    At the heart of Rhino.ai is a three-phase platform architecture. In the Understand & Extract phase, AI scans code, documents, SaaS applications, and rules to analyze existing systems and identify hidden logic, dependencies, and inefficiencies. The Organize & Structure phase captures extracted insights in a structured repository known as Universal Application Notation (UAN). UAN standardizes business logic, giving users the power to refine existing logic before moving forward so they modernize instead of merely migrating. Finally, the Generate & Transform phase converts legacy workflows into scalable applications, supporting SaaS platforms like ServiceNow and Appian, open-source microservices, and external agents. UAN outputs can produce both modern applications and comprehensive documentation.

    Deployment & Control

    The Rhino.ai platform supports flexible deployment models: choose an enterprise-grade SaaS offering, or install Rhino.ai in your self-managed environment and bring your own language model. Rhino.ai does not access your databases or any data in your environment; your data stays entirely under your control, and you maintain control over security policies, compliance requirements, and access controls. You can also use your preferred AI models (OpenAI, Azure OpenAI, Anthropic, or your own fine-tuned models), and Rhino.ai comes with a full set of audit trails, citations, and other capabilities that provide trust and transparency.

    Human & AI-Ready Deliverables

    With Rhino.ai, you can update requirements or create new ones, and produce user stories, test cases, ERDs, and flow diagrams to accelerate implementation. Rhino.ai's documentation is ready for both human teams and AI agents. Development teams receive clear, implementation-ready documentation and detailed architecture diagrams for informed decision making. AI agents such as AWS Kiro, Windsurf, and Cursor, as well as low-code platform agents like Appian Composer, ServiceNow Now Creator, and OutSystems Mentor, can immediately consume Rhino.ai output to power generation of the new, modernized application.

    Flexible Output & Agentic Transformation

    Beyond documentation, Rhino.ai offers multiple modernization paths. It can transform legacy code and SaaS apps into modern microservices or SaaS architectures with minimal disruption, and its automated code and SaaS analysis converts applications to be AI-ready. Rhino.ai also supports replacing outdated processes with agents through agentic workflows, process automation, and human-agent collaboration. The result is a secure, flexible, and comprehensive modernization platform--delivering faster modernization, lower costs, reduced technical debt, and higher-quality applications through automated discovery, documentation, and requirements generation with full traceability and flexibility.

    Highlights

    • Extract detailed functional and technical understanding from the widest variety of legacy code and low-code platforms on the market
    • Generate comprehensive documents with flexible options for structure, detail, tone, and more, so you can tailor results to executive, analyst, or technical audiences
    • Update requirements or create new ones, and produce user stories, test cases, ERDs, and flow diagrams to accelerate implementation of your reimagined application

    Details

    Delivery method

    Supported services

    Delivery option
    Helm Chart

    Latest version

    Operating system
    Linux

    Deployed on AWS



    Pricing

    Rhino Self-Managed

    Pricing and entitlements for this product are managed through an external billing relationship between you and the vendor. You activate the product by supplying a license purchased outside of AWS Marketplace, while AWS provides the infrastructure required to launch the product. AWS Subscriptions have no end date and may be canceled any time. However, the cancellation won't affect the status of the external license.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Vendor refund policy

    Please contact support@rhino.ai.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Helm Chart

    Supported services:
    • Amazon EKS
    • Amazon EKS Anywhere
    Helm chart

    Helm charts are Kubernetes YAML manifests combined into a single package that can be installed on Kubernetes clusters. The containerized application is deployed on a cluster by running a single Helm install command to install the seller-provided Helm chart.

    Version release notes

    Release 1.42.0 empowers teams with smarter document workflows, more intuitive domain management, and higher extraction precision, ensuring every project runs with greater transparency, efficiency, and reliability.

    Key Highlights

    1. Document Editing & Generation Enhancements (Feature-Flagged for Internal Use Only)

    Refined the document editing experience with new tools for easier customization and content precision:

    Click-to-Edit Text: Modify sections directly without switching views.

    Refine and Reset Options: Single-shot refinement prompts and one-click resets for section-level control.

    Verbosity Edits: Each section retains its configured verbosity and type for consistent outputs, and users can set per-section verbosity to concise, moderate, or thorough.

    Improved Templates & Fonts: Finalized document templates now feature optimized typography and enhanced visual structure.

    PDF Export for Stories: Users can now export User Stories and Appian Composer Stories directly to PDF.

    Upgraded visual consistency by removing legacy fonts (Helvetica Neue) and applying the finalized document theme across all screens.

    2. Extraction & Workflow Enhancements

    Expanded FileNet and JBPM extraction coverage and validation, with additional source elements, references, and benchmarks.

    Enabled domain-driven extractions with audit tracking and completeness reporting via UABenchmark.

    Improved JBPM accuracy with refined prompt alignment and benchmarking validation.

    Improved benchmark reporting with new node citation tracking, better evaluator matching, and performance optimizations.

    3. Extraction Experience (Feature-Flagged for Internal Use Only)

    Users now have full control over domain creation, editing, and deletion, including updates to domain names and descriptions. Domains are displayed in a modern card-layout grid for quick reference and improved organization.

    Introduced a new Assets UI for loading and displaying asset information with enhanced visibility and responsiveness.

    4. Quality, Security & Stability

    Resolved key issues related to domain filters, JBPM citations, markdown formatting, and table exports in UADv2.

    Addressed minor extraction issues, including MSSQL primary key coverage and workflow ID consistency.

    Closed critical security vulnerabilities.

    Added Claude 4 Sonnet validation via LiteLLM and dependency updates for stronger model consistency.

    5. Streamlined Deployment

    For self-managed customers, we have removed the extra proxy deployment/pod/service. The frontend deployment/pod/service now handles all HTTP traffic for the Rhino platform, resulting in a smaller overall Kubernetes footprint.

    Additional details

    Usage instructions

    1. Obtain a license file (.lic), license keys, and generate-license-secret utility script from Rhino.
    2. Run the generate-license-secret script, providing the license file path, key value, and signature key value. The script generates a Kubernetes Secret manifest for the license.
    3. Deploy the generated manifest to your desired namespace: kubectl apply -f path/to/manifest.yaml -n <namespace>
    4. If you wish, deploy a DB password secret to your namespace and set global.postgresql.auth.existingSecret and related keys.
    5. Create a Helm values file:
       a. Ensure that "licenseSecret" and "global.storageClass" are set to match your infrastructure
       b. Configure frontend.service parameters as desired (node port, load balancer) to expose the web application outside the cluster
       c. Configure "ai.llm" values to match your desired LLM connection (see LLM configuration appendix)
       d. Configure any optional values, if desired
    6. Deploy: helm install rhino -n <namespace> ...
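As an illustrative sketch of step 2: the manifest produced by the utility is an ordinary Kubernetes Secret built from the base64-encoded license file. This is only an assumption about its shape; the real generate-license-secret script ships from Rhino, and the file name, secret name, and key used below are hypothetical.

```shell
# Hypothetical sketch of what a license-secret manifest looks like.
# The real generate-license-secret script from Rhino may name things differently.
LICENSE_FILE=rhino.lic
printf 'example-license-contents' > "$LICENSE_FILE"   # stand-in for the real .lic file

B64=$(base64 < "$LICENSE_FILE" | tr -d '\n')          # Secret data values must be base64

cat > license-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: rhino-license
type: Opaque
data:
  license.lic: $B64
EOF

echo "wrote license-secret.yaml"
```

You would then apply the manifest as in step 3: kubectl apply -f license-secret.yaml -n <namespace>.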

    LLM Configuration

    Rhino supports connecting to AI models via OpenAI, Azure OpenAI, or LiteLLM proxy. API keys can be stored as Kubernetes secrets and provided to the installation. Below are sample configurations for each provider.

    1. OpenAI Configuration

    ai:
      llm:
        provider: openai
        model: "gpt-4.1"
        api_key:
          secretName: my-openai-secret
          key: key_in_secret_containing_api_key

    Optional: Use base_url if you route the OpenAI connection through a proxy.

    2. Azure OpenAI Configuration

    ai:
      llm:
        provider: azure
        model: "gpt-4.1"
        api_base: "https://your-resource.openai.azure.com"
        api_version: "2024-12-01-preview"
        engine: "deployment-name"
        api_key:
          secretName: my-azure-openai-secret
          key: key_in_secret_containing_api_key

    Note: These parameters are shown when you open your deployment in the Azure Foundry portal, on the screen that lists the Python connection parameters.

    3. LiteLLM Proxy Configuration

    ai:
      llm:
        provider: litellm
        model: "claude-3-sonnet"
        base_url: "http://my-litellm-proxy.mycompany.com"
        proxy_model_name: "my-claude-model"  # optional; omit if it matches the value of model
        api_key:
          secretName: my-litellm-secret
          key: key_in_secret_containing_litellm_proxy_api_key

    Currently supports the claude-3-sonnet and claude-4-sonnet model types.
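Each api_key.secretName above refers to a standard Kubernetes Opaque Secret that you create before installing the chart. As a sketch (the secret name and key are placeholders matching the OpenAI sample; the key value shown is a dummy), such a secret could look like:

```yaml
# Hypothetical secret holding the LLM API key referenced by ai.llm.api_key.
# stringData lets you supply the value as plain text; Kubernetes base64-encodes it.
apiVersion: v1
kind: Secret
metadata:
  name: my-openai-secret
type: Opaque
stringData:
  key_in_secret_containing_api_key: "sk-...your-api-key..."
```

Apply it to the same namespace as the Rhino installation with kubectl apply -f secret.yaml -n <namespace>.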

    OIDC Configuration

    Rhino supports integration with external identity providers (IdPs) using OpenID Connect (OIDC). The following configuration is placed under the global.oauth section in your Helm values file.

    global:
      oauth:
        issuerUrl: "https://your-idp.com/oauth2"
        userIdClaim: email  # or username, sub, etc.
        existingSecret: my-oidc-secret
        secretKeys:
          clientId: key_in_secret_holding_client_id
          clientSecret: key_in_secret_holding_client_secret

        # Optional fields depending on your IdP:
        additionalScopes: "openid,email,profile,offline_access"
        urlParameters:
          example_key: "example-value"
        audience: ""  # only if required by your IdP

    issuerUrl: The base URL of your IdP's OIDC discovery endpoint (exclude .well-known)
    userIdClaim: The claim to use as the unique user ID (e.g., email, username)
    existingSecret: Kubernetes secret that holds the client credentials
    clientId, clientSecret: Keys inside the Kubernetes secret

    Optional Values

    additionalScopes: Needed for refresh-token support (commonly includes offline_access)
    urlParameters: Custom query parameters required by some IdPs in the redirect URL
    audience: Some IdPs require this to match client/application IDs
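The existingSecret referenced above is likewise an ordinary Kubernetes Opaque Secret. As a sketch (names match the sample configuration; the values are placeholders you obtain from your IdP):

```yaml
# Hypothetical secret holding the OIDC client credentials for global.oauth.
apiVersion: v1
kind: Secret
metadata:
  name: my-oidc-secret
type: Opaque
stringData:
  key_in_secret_holding_client_id: "<client-id-from-your-idp>"
  key_in_secret_holding_client_secret: "<client-secret-from-your-idp>"
```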

    Support

    Vendor support

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Customer reviews

    Ratings and reviews

    0 ratings
    No customer reviews yet