
    AI inference stress server

    Sold by: Baideac 
    Deployed on AWS
    This product helps you stress-test an inference server so you can evaluate your application at scale.

    Overview

    This product stresses an inference server with concurrent queries using custom large datasets and analyses server resource utilization (e.g., GPU utilization, GPU memory, CPU utilization, and CPU memory) across one or multiple GPUs. The monthly charge covers support and customization on the go.
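
    As a rough illustration of the concurrent-query pattern this product automates, here is a minimal Python sketch that fires parallel requests at an inference endpoint and reports mean latency. The endpoint URL and request shape are assumptions for illustration, not this product's documented API.

        # Minimal concurrent-stress sketch; ENDPOINT and the request shape
        # are assumptions, not this product's documented API.
        import time
        from concurrent.futures import ThreadPoolExecutor

        import requests

        ENDPOINT = "http://<instance-public-ip>/"  # placeholder instance address
        CONCURRENCY = 32
        TOTAL_REQUESTS = 256

        def one_query(_: int) -> float:
            """Send one request and return its latency in seconds."""
            start = time.perf_counter()
            resp = requests.get(ENDPOINT, timeout=60)
            resp.raise_for_status()
            return time.perf_counter() - start

        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            latencies = list(pool.map(one_query, range(TOTAL_REQUESTS)))

        print(f"mean latency: {sum(latencies) / len(latencies):.3f} s")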

    Highlights

    • Helps determine and analyse large datasets
    • Accepts any JSON-based data URL; the server ingests the data so you can then chat about anything in it (see the sketch after this list)
    • Support is provided by email, along with customization on the go
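
    The second highlight above describes pointing the server at a JSON data URL and then chatting with that data. A minimal sketch of such a flow is shown below; the /ingest and /chat endpoint paths and payload fields are hypothetical placeholders, since this page does not document the product's actual API.

        # Hypothetical "ingest a JSON data URL, then chat" flow; the /ingest
        # and /chat paths are placeholders, not documented endpoints.
        import requests

        BASE = "http://<instance-public-ip>"  # placeholder instance address

        # Hand the server a JSON dataset by URL (hypothetical endpoint).
        resp = requests.post(f"{BASE}/ingest",
                             json={"url": "https://example.com/data.json"},
                             timeout=120)
        resp.raise_for_status()

        # Ask a question about the ingested data (hypothetical endpoint).
        answer = requests.post(f"{BASE}/chat",
                               json={"message": "Summarize this dataset."},
                               timeout=120)
        answer.raise_for_status()
        print(answer.json())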

    Details

    Sold by
    Baideac

    Categories

    Delivery method
    Amazon Machine Image (AMI)

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Ubuntu 22.04

    Deployed on AWS



    Pricing

    AI inference stress server

    Pricing is based on a fixed subscription cost and actual usage of the product. You pay the same amount each billing period for access, plus an additional amount according to how much you consume. The fixed subscription cost is prorated, so you're only charged for the number of days you've been subscribed. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Fixed subscription cost

    Monthly subscription
    $850.00/month

    Usage costs (15)

    Dimension                  Cost/hour
    g5.xlarge (Recommended)    $0.55
    g5.2xlarge                 $0.55
    g5.4xlarge                 $0.55
    g4dn.metal                 $0.55
    g4dn.16xlarge              $0.55
    g4dn.2xlarge               $0.55
    g4dn.12xlarge              $0.55
    g5.8xlarge                 $0.55
    g4dn.xlarge                $0.55
    g5.24xlarge                $0.55
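
    To see how the fixed and usage components combine, here is an illustrative estimate using the listed $850.00/month subscription and the $0.55/hour dimension rate. The day and hour counts are made-up example inputs, and AWS infrastructure charges come on top.

        # Illustrative cost estimate from the listed prices; the 12 days
        # subscribed and 200 instance-hours are made-up example inputs.
        MONTHLY_FEE = 850.00   # fixed subscription, USD/month (from the listing)
        HOURLY_RATE = 0.55     # usage cost per instance-hour (from the listing)
        DAYS_IN_PERIOD = 30    # assumed length of the billing period

        days_subscribed = 12   # example: subscribed mid-period
        instance_hours = 200   # example usage

        prorated_fee = MONTHLY_FEE * days_subscribed / DAYS_IN_PERIOD
        usage_cost = HOURLY_RATE * instance_hours
        print(f"prorated subscription: ${prorated_fee:.2f}")  # $340.00
        print(f"usage: ${usage_cost:.2f}")                    # $110.00
        print(f"total before AWS infrastructure: ${prorated_fee + usage_cost:.2f}")  # $450.00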

    Vendor refund policy

    No refund policy


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    • OpenStack plugin support
    • llama-bench support for token-based benchmarking
    • Minor bug fixes

    Additional details

    Usage instructions

    All services come up automatically once the instance boots.

    We suggest manually configuring your Security Group/firewall settings to control access to your instance. The 1-Click Security Group currently opens only ports 22 and 80, so you can access the instance via SSH with the 'ubuntu' username. If you chose the 1-Click Security Group, you can change it later via the AWS Console or API to enable other applications.

    To connect to this instance via SSH, generate a new key pair or use an existing key pair (.pem file) when launching the EC2 instance. Using this key pair, you can log in to the instance as follows: ssh -i <key-pair> ubuntu@<public_IP_of_the_instance>

    Alternatively, you can log in to the instance via the AWS console by clicking the "Connect Instance" option on the EC2 instances page. The username must be "ubuntu".

    To check the logs:

    1. Connect to the instance via SSH or the console.
    2. Type "sudo su" to become the superuser (no password required).
    3. Type "bv-ai-stress verbose" and press Enter. You should see logs like those below; more logs are generated once the inference server is in use.

    root@ip-<>:~$ bv-ai-stress verbose
    Streaming logs for process: bv_inference_stress (ID: 0)
    [bv_inference_stress] 2024-09-06 09:07:12 - INFO - 127.0.0.1:35034 - GET / HTTP/1.1 - 200 OK
    [bv_inference_stress] 2024-09-06 09:07:14 - INFO - 127.0.0.1:35036 - GET /models/ HTTP/1.1 - 200 OK
    [bv_inference_stress] 2024-09-06 09:07:14 - INFO - 127.0.0.1:37152 - GET /favicon.ico HTTP/1.1 - 404 Not Found
    [bv_inference_stress] 2024-09-06 09:07:17 - INFO - 127.0.0.1:37166 - POST /queue-count/ HTTP/1.1 - 200 OK

    Note: Right after launching the instance, it may take a few minutes for logs to appear; try again after some time if you do not see any. There are two ways to access the platform:

    1. Via the UI, at http://<instance-public-ip> (see the readiness sketch after this list).
    2. Via the CLI; view the help/docs with the command "bv-ai-stress -h".
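
    Because the services can take a few minutes to come up, a small readiness poll against the UI root (which the sample logs above show returning 200 OK) can confirm the platform is reachable before you start testing; the instance address is a placeholder.

        # Poll the platform UI root until it responds; services may take a
        # few minutes after boot. The address below is a placeholder.
        import time

        import requests

        URL = "http://<instance-public-ip>/"

        for attempt in range(30):          # up to ~5 minutes
            try:
                if requests.get(URL, timeout=5).status_code == 200:
                    print("platform is up")
                    break
            except requests.ConnectionError:
                pass
            time.sleep(10)                 # wait before retrying
        else:
            print("platform did not respond within the polling window")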

    For any issues or queries, contact our support team at aws-mp-support@bhojr.com. For more information on the product, see: https://www.bhojr.com/prod/ai-inference-stresser.html

    Resources

    Vendor resources

    Support

    Vendor support

    Support is available by email at support@baideac.com.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Accolades

    • Top 100 in Generative AI
    • Top 10 in Summarization-Text, Generation-Text
    • Top 10 in Serverless Workloads

    Overview

    AI generated from product descriptions
    • Concurrent Query Simulation: Stress the inference server with concurrent queries to test application performance at scale.
    • Resource Utilization Monitoring: Analysis of server resource utilization metrics including GPU utilization, GPU memory, CPU utilization, and CPU memory.
    • Custom Data Input: Support for ingesting custom large data through JSON-based data URLs for testing purposes.
    • Multi-GPU Support: Capability to test against multiple GPU configurations.
    • Data Analysis: Determination and analysis of large datasets for inference testing scenarios.
    • Model Quantization Support: Support for 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization, enabling inference on GPUs with 16 GB or 24 GB of memory.
    • Inference Engine: llama.cpp inference with a plain C/C++ implementation without dependencies, supporting interactive and server mode operations.
    • GPU and CPU Hybrid Processing: Capability to run inference simultaneously on GPU and CPU, allowing execution of larger models when GPU memory is insufficient.
    • Multi-framework Support: Integration with llama-cpp-python for OpenAI API compatibility, Open Interpreter for code execution, and the Tabby coding assistant for IDE integration (see the sketch after this list).
    • GPU Container Provisioning: Spin up GPU-enabled containers in as little as one second with custom infrastructure for rapid iteration and scaling.
    • Autoscaling Capability: Automatically scale resources from zero to hundreds of GPUs and back down based on workload demands, without manual infrastructure management.
    • Infrastructure as Code Deployment: Deploy Python functions to the cloud using infrastructure as code to define custom container images and hardware requirements.
    • Serverless Compute Architecture: Serverless compute platform that abstracts infrastructure management for ML inference, fine-tuning, and batch data processing workloads.
    • Pay-Per-Use Resource Billing: Resource-based billing model that charges only for the actual compute time consumed during workload execution.
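
    The feature list above mentions llama-cpp-python with OpenAI API compatibility and quantized models with GPU offload. As a minimal sketch of that open-source library (not the product's bundled tooling), assuming a locally downloaded quantized GGUF model at a placeholder path:

        # Minimal llama-cpp-python sketch: load a quantized GGUF model with
        # GPU offload and run one chat completion. The model path is a
        # placeholder; this shows the library, not this product's own CLI.
        from llama_cpp import Llama

        llm = Llama(
            model_path="/path/to/model-q4_k_m.gguf",  # placeholder 4-bit model
            n_gpu_layers=-1,  # offload all layers to GPU; lower to split GPU/CPU
            n_ctx=4096,       # context window size
        )

        result = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Say hello in one sentence."}],
            max_tokens=64,
        )
        print(result["choices"][0]["message"]["content"])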

    Contract

    Standard contract

    Customer reviews

    Ratings and reviews

    0 ratings, 0 reviews

    5 star: 0%
    4 star: 0%
    3 star: 0%
    2 star: 0%
    1 star: 0%

    No customer reviews yet. Be the first to review this product. We've partnered with PeerSpot to gather customer feedback. You can share your experience by writing or recording a review, or scheduling a call with a PeerSpot analyst.