
Overview
This is a hybrid classical-quantum machine learning solution that detects cracks in images of concrete and other civil-infrastructure surfaces. The algorithm places a pre-trained DCGAN in front of a variational quantum circuit: the DCGAN, trained on the available dataset, performs feature transformation, and the enhanced features are fed as input to the quantum architecture. The variational quantum circuit layers carry trained parameters dedicated to surface-crack image classification.
Highlights
- The appearance of cracks and distortions can be visually unattractive and disconcerting for occupants, and if left untreated they can affect the integrity, safety, and stability of a structure. For railway bridges, flyovers, and foot bridges, it is crucial to inspect the structures regularly for cracks and other defects. This solution can be used by agencies such as municipalities, review boards, and construction companies to monitor civil-structure health and take corrective action when necessary.
- The Quantum-based Surface Crack Detection solution analyzes images of concrete surfaces and predicts the presence or absence of cracks. It provides a quantum ML based alternative to state-of-the-art classical deep learning image classification systems.
- Need Customized Deep learning and Machine Learning Solutions? Get in Touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), Recommended | Model inference on the ml.m5.large instance type, batch mode | $40.00 |
| ml.m5.large Inference (Real-Time), Recommended | Model inference on the ml.m5.large instance type, real-time mode | $20.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $40.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $40.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $40.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $40.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $40.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $40.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Version 2.1: bug fixes.
Additional details
Inputs
- Summary
  - The input dataset should be a zip folder containing images in PNG format.
  - The input zip folder should not contain more than 5 images.
- Input MIME type
  - application/zip
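The input constraints above (a zip of at most 5 PNG images) can be enforced before upload. Below is a minimal sketch; the helper name and the images-as-bytes interface are illustrative, not part of the product's API.

```python
import io
import zipfile

MAX_IMAGES = 5  # the listing caps the input zip at 5 images


def build_input_zip(images):
    """Bundle named PNG byte blobs into an in-memory zip for upload.

    `images` maps file names (e.g. "crack_01.png") to raw PNG bytes.
    Raises ValueError if the listing's input constraints are violated.
    """
    if len(images) > MAX_IMAGES:
        raise ValueError(f"input zip must contain at most {MAX_IMAGES} images")
    if not all(name.lower().endswith(".png") for name in images):
        raise ValueError("all input images must be PNG files")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in images.items():
            zf.writestr(name, data)  # write each image under its own name
    buf.seek(0)
    return buf
```

The resulting buffer can then be sent as the `application/zip` payload of a real-time or batch inference request.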
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Customer reviews
Data-driven testing has transformed how I optimize messaging and improve conversion decisions
What is our primary use case?
My main use case for MPhasis Testing Services involves both professional and business applications. I use it to evaluate how messaging variations impact customer engagement and conversion rates, and to test which headlines or key phrases drive a higher click-through rate in marketing campaigns. I optimize website content for better user attention and retention, improve product message clarity before launching new features, and validate branding language across different target audiences. In research and analytics use cases, I run AB testing on the messaging in controlled experiments and gather data-driven insights on how emphasis affects decision-making behavior. Additionally, I use this for educational content creation and optimizing marketing messaging through AB testing to improve engagement and conversion rates by testing different emphasis.
In one project, we were launching a new SaaS feature and needed to decide the value proposition to highlight on the landing page with MPhasis Testing Services. We were uncertain whether customers cared more about cost-saving, automation speed, or accuracy improvement. Using MPhasis Testing Services, I created multiple versions of the landing page, each emphasizing a different benefit in the main headline, subheadings, and call to action. The rest of the content remained the same to isolate the impact of the emphasized message. We ran the test for two weeks and tracked metrics including click-through rate, demo sign-up, and time on page. The version emphasizing automation speed outperformed the others by 32% in demo booking. Interestingly, cost-saving, which I originally thought would win, performed the worst. Based on those results, I updated our campaign message across ads, email sequences, and sales collateral to focus more heavily on speed and efficiency. That single shift improved overall conversion rates for the quarter and helped align our marketing with what customers actually valued.
MPhasis Testing Services helps me improve my work by replacing assumptions with data-driven decisions, clarifying what truly resonates with our audience. It reduces risk before scaling campaigns and improves ROI by focusing efforts on proven messaging. It fits into my workflow in several ways. For early-stage validation, before finalizing messaging, product positioning, or UI emphasis, I use testing during the draft stage. Instead of debating internally, we test early variations in small, controlled batches, preventing late-stage rework. For iterative optimization, I treat emphasis testing as iterative rather than running a single AB test and stopping, proceeding through phase one, phase two, and phase three. In phase one, I test the primary value proposition; in phase two, I refine sub-heading emphasis; in phase three, I optimize call-to-action phrasing. Each phase builds on validated insights from the previous one. For cross-channel alignment, once I identify what message performs best, I apply that emphasis consistently across ads, email campaigns, website copy, and sales enablement materials. The fourth point is risk reduction before scaling, and the final consideration is internal decision support. Overall, MPhasis Testing Services helps me reduce guesswork, increase conversion efficiency, improve consistency in messaging, shorten feedback loops, and make more confident strategic decisions.
How has it helped my organization?
MPhasis Testing Services has positively impacted my organization by improving conversion rates. The most obvious impact is a performance lift created by systematic testing. We have improved key metrics including click-through rate, demo bookings, and sign-ups. In several campaigns, small messaging changes driven by testing results have led to double-digit increases in conversion. Instead of redesigning entire pages, we often refine emphasis, headlines, value statements, or calls to action. It reduces guesswork and subjectivity and promotes a faster experimentation culture. Testing data has made our decision-making more natural, reducing internal debates, shortening the feedback cycle, and increasing confidence in our decisions.
For example, in a recent SaaS feature launch campaign where we were promoting new automation features, we needed to determine which core benefits to emphasize. We ran tests comparing two or three versions: version A emphasizing reducing operational costs, version B focusing on making tasks three times faster, and version C about improving accuracy by 40% or 50%. We ran the test over 18 days with roughly equal traffic split across the three variations. The results showed that cost-saving in version A had a low conversion rate; version B for speed had a conversion rate of 6.1% and a demo booking rate of 3.0%, while version C for accuracy had a conversion rate of 5% and a demo booking rate of 2.3%. The impact was a 27% increase in the overall conversion rate along with a 41% lower cost per demo due to improved efficiency, as version B clearly outperformed the others. Consequently, I revised email subject lines to highlight automation benefits and adjusted our sales pitch.
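Whether a gap like version B's 6.1% versus version C's 5.0% conversion rate is statistically meaningful depends on traffic volume, which the account above does not state. A minimal sketch of the standard two-proportion z-test, using hypothetical traffic of 5,000 visitors per variant (an assumption, not a figure from the review):

```python
import math


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value).

    conv_* are conversion counts, n_* are visitor counts per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Hypothetical traffic: 5,000 visitors per variant at the reported rates
# (6.1% for version B, 5.0% for version C).
z, p = two_proportion_z_test(305, 5000, 250, 5000)
```

At that traffic level the difference clears the conventional 5% significance threshold; with substantially less traffic it would not, which is why sample size matters before declaring a winner.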
What is most valuable?
The best features MPhasis Testing Services offers include AB and multivariate testing, real-time results and analytics, audience segmentation, cross-channel integration, statistical confidence reporting, automated winner selection, heatmaps and attention tracking, easy experiment setup and templates, integration with analytics and CRM tools, and custom reporting dashboards. The parts that stand out the most to me are the real-time analytics and statistical confidence because they turn raw data into actionable decisions without second-guessing. I use audience segmentation afterward because emphasis can hit different user groups in very different ways, making this personalization data-driven.
I utilize the real-time analytics and audience segmentation features of MPhasis Testing Services, which provide the most practical value in my day-to-day work. With the real-time analytics, I use it for early signal detection and faster iteration cycles. In early signal detection, when a test goes live, I monitor performance within the first 24 to 48 hours, not looking for the final conclusion yet, just directional signals. If one variation is clearly underperforming, such as significantly lower CTR, I can pause it early to prevent wasted traffic. The second feature is faster iteration cycles where, instead of waiting a week for static reports, I prefer the real-time dashboard. It lets me spot trends quickly, adjust targeting, and refine messages mid-flight. This experimentation loop dramatically aids budget protection and stakeholder transparency because it is easier to align teams when everyone can see live performance data instead of relying on subjective feedback. This makes testing easier, reduces guesswork, speeds up decision-making, and prevents small mistakes from becoming expensive ones. For audience segmentation, I use it for identifying different message strengths because often one emphasis does not work equally well for all users. Once I identify a pattern, I implement version A for new users and version B for returning users.
What needs improvement?
MPhasis Testing Services could be improved in several ways. First, it could provide faster statistical clarity because sometimes it takes longer than expected to reach statistical significance. It could benefit from a better adaptive testing model with smart traffic allocation, early signal indicators without over-claiming confidence, and built-in guidance for required sample size before launch. Additionally, I seek deeper behavioral insights beyond clicks because most platforms focus heavily on CTR and conversion. It would be valuable to have stronger integration with qualitative data and emotional sentiment tracking, allowing us to tie engagement actions such as scroll depth to specific messaging elements.
I also envision smarter AI-powered recommendations which do not just report which version wins but automatically suggest new emphasis angles, highlight patterns across past experiments, and provide recommendations based on audience segment behavior. Moreover, more proactive insights with less manual analysis would be beneficial, possibly including built-in best practices for test validation before launch.
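The "required sample size before launch" guidance wished for above can be approximated with the common normal-approximation formula for a two-proportion test. This is a rough planning sketch under assumed defaults (5% two-sided alpha, 80% power), not a feature of MPhasis Testing Services:

```python
import math

# Standard normal critical values for the assumed defaults.
Z_ALPHA = 1.96  # two-sided alpha = 0.05
Z_POWER = 0.84  # power = 0.80


def required_sample_size(baseline_rate, min_detectable_lift):
    """Rough per-variant sample size for a two-proportion A/B test.

    baseline_rate: current conversion rate (e.g. 0.05 for 5%)
    min_detectable_lift: absolute lift to detect (e.g. 0.01 for +1 point)
    Uses the normal-approximation formula; treat the result as an estimate.
    """
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2
    numerator = (Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
                 + Z_POWER * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)
```

For example, detecting a one-point lift from a 5% baseline needs on the order of eight thousand visitors per variant, which makes concrete the advice below about running tests longer on low-traffic pages.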
For how long have I used the solution?
I have been using MPhasis Testing Services for two years.
What other advice do I have?
I recommend others looking into using MPhasis Testing Services to start with a clear hypothesis and not test randomly. It is important to know what you are trying to learn; for example, my belief is that emphasizing speed will outperform cost-saving for our users. This approach makes the results actionable instead of just conducting a test. Additionally, I advise testing high-impact areas first, focusing on pages or touchpoints that have meaningful traffic, such as landing page headlines, call-to-action buttons, pricing page value statements, and demo request forms, because small improvements in high-intent areas create outsized impact. One should avoid testing too many variables at once to ensure sufficient traffic; run tests longer, focus on bigger messaging, and prioritize high-volume pages, as statistical significance requires a sufficient sample size. If traffic is low, run the test longer, as testing without enough data may lead to misleading conclusions, or use segmentation early. I give MPhasis Testing Services a rating of 8 out of 10.