
Streamline code migration using Amazon Nova Premier with an agentic workflow

Many enterprises are burdened with mission-critical systems built on outdated technologies that have become increasingly difficult to maintain and extend.

This post demonstrates how you can use the Amazon Bedrock Converse API with Amazon Nova Premier within an agentic workflow to systematically migrate legacy C code to modern Java/Spring framework applications. By breaking down the migration process into specialized agent roles and implementing robust feedback loops, organizations can accomplish the following:

  • Reduce migration time and cost – Automation handles repetitive conversion tasks while human engineers focus on high-value work.
  • Improve code quality – Specialized validation agents make sure the migrated code follows modern best practices.
  • Minimize risk – The systematic approach prevents critical business logic loss during migration.
  • Enable cloud integration – The resulting Java/Spring code can seamlessly integrate with AWS services.

Challenges

Code migration from legacy systems to modern frameworks presents several significant challenges that require a balanced approach combining AI capabilities with human expertise:

  • Language paradigm differences – Converting C code to Java involves navigating fundamental differences in memory management, error handling, and programming paradigms. C’s procedural nature and direct memory manipulation contrast sharply with Java’s object-oriented approach and automatic memory management. Although AI can handle many syntactic transformations automatically, developers must review and validate the semantic correctness of these conversions.
  • Architectural complexity – Legacy systems often feature complex interdependencies between components that require human analysis and planning. In our case, the C code base contained intricate relationships between modules, with some transaction programs (TPs) connected to as many as 12 other modules. Human developers must create dependency mappings and determine migration order, typically starting from leaf nodes with minimal dependencies. AI can assist in identifying these relationships, but the strategic decisions about migration sequencing require human judgment.
  • Maintaining business logic – Making sure critical business logic is accurately preserved during translation requires continuous human oversight. Our analysis showed that although automatic migration is highly successful for simple, well-structured code, complex business logic embedded in larger files (over 700 lines) requires careful human review and often manual refinement to prevent errors or omissions.
  • Inconsistent naming and structures – Legacy code often contains inconsistent naming conventions and structures that must be standardized during migration. AI can handle many routine transformations—converting alphanumeric IDs in function names, transforming C-style error codes to Java exceptions, and converting C structs into Java classes—but human developers must establish naming standards and review edge cases where automated conversion may be ambiguous.
  • Integration complexity – After converting individual files, human-guided integration is essential for creating a cohesive application. Variable names that were consistent across the original C files often become inconsistent during individual file conversion, requiring developers to perform reconciliation work and facilitate proper inter-module communication.
  • Quality assurance – Validating that converted code maintains functional equivalence with the original requires a combination of automated testing and human verification. This is particularly critical for complex business logic, where subtle differences can lead to significant issues. Developers must design comprehensive test suites and perform thorough code reviews to ensure migration accuracy.

These challenges necessitate a systematic approach that combines the pattern recognition capabilities of large language models (LLMs) with structured workflows and essential human oversight to produce successful migration outcomes. The key is using AI to handle routine transformations while keeping humans in the loop for strategic decisions, complex logic validation, and quality assurance.

Solution overview

The solution employs the Amazon Bedrock Converse API with Amazon Nova Premier to convert legacy C code to modern Java/Spring framework code through a systematic agentic workflow. This approach breaks down the complex migration process into manageable steps, allowing for iterative refinement and handling of token limitations. The solution architecture consists of several key components:

  • Code analysis agent – Analyzes C code structure and dependencies
  • Conversion agent – Transforms C code to Java/Spring code
  • Security assessment agent – Identifies vulnerabilities in legacy and migrated code
  • Validation agent – Verifies conversion completeness and accuracy
  • Refine agent – Rewrites the code based on the feedback from the validation agent
  • Integration agent – Combines individually converted files

Our agentic workflow is implemented using the Strands Agents framework combined with the Amazon Bedrock Converse API for robust agent orchestration and LLM inference. The architecture (as shown in the following diagram) uses a hybrid approach that combines Strands session management capabilities with custom BedrockInference handling for token continuation.

The solution uses the following core technologies:

  • Strands Agents framework (v1.1.0+) – Provides agent lifecycle management, session handling, and structured agent communication
  • Amazon Bedrock Converse API – Powers the LLM inference with the Amazon Nova Premier model
  • Custom BedrockInference class – Handles token limitations through text prefilling and response continuation
  • Asyncio-based orchestration – Enables concurrent processing and non-blocking agent execution
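
The asyncio-based orchestration can be illustrated with a minimal sketch. The coroutine names and file list here are hypothetical stand-ins, not the solution's actual functions:

```python
import asyncio

# Hypothetical stand-in for an agent's non-blocking conversion call
async def convert_file(name: str) -> str:
    await asyncio.sleep(0)  # placeholder for an awaitable Bedrock inference call
    return f"{name}.java"

async def convert_all(files: list[str]) -> list[str]:
    # asyncio.gather schedules the per-file conversions concurrently,
    # so one slow file does not block the others
    return await asyncio.gather(*(convert_file(f) for f in files))

results = asyncio.run(convert_all(["billing", "ledger"]))
print(results)  # ['billing.java', 'ledger.java']
```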

The workflow consists of the following steps:

1. Code analysis:

  • Code analysis agent – Performs input code analysis to understand the conversion requirements. Examines C code base structure, identifies dependencies, and assesses complexity.
  • Framework integration – Uses Strands for session management and BedrockInference for analysis.
  • Output – JSON-structured analysis with dependency mapping and conversion recommendations.

2. File categorization and metadata creation:

  • Metadata – ImplementationFileMetadata data class with complexity assessment.
  • Categories – Simple (0–300 lines), Medium (300–700 lines), Complex (over 700 lines).
  • File types – Standard C files, header files, and database I/O (DBIO) files.

3. Individual file conversion:

  • Conversion agent – Performs code migration on individual files based on the information from the code analysis agent.
  • Token handling – Uses the stitch_output() method for handling large files that exceed token limits.

4. Security assessment phase:

  • Security assessment agent – Performs comprehensive vulnerability analysis on both legacy C code and converted Java code.
  • Risk categorization – Classifies security issues by severity (Critical, High, Medium, Low).
  • Mitigation recommendations – Provides specific code fixes and security best practices.
  • Output – Detailed security report with actionable remediation steps.

5. Validation and feedback loop:

  • Validation agent – Analyzes conversion completeness and accuracy.
  • Refine agent – Applies iterative improvements based on validation results.
  • Iteration control – Maximum five feedback iterations with early termination on satisfactory results.
  • Session persistence – Strands framework maintains conversation context across iterations.

6. Integration and finalization:

  • Integration agent – Attempts to combine individually converted files.
  • Consistency resolution – Standardizes variable naming and provides proper dependencies.
  • Output generation – Creates cohesive Java/Spring application structure.

7. DBIO conversion (specialized):

  • Purpose – Converts SQL DBIO C source code to MyBatis XML mapper files.
  • Framework – Uses the same Strands and BedrockInference hybrid approach for consistency.
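
The complexity categorization from step 2 can be sketched as a small data class. Field names here are illustrative; the actual ImplementationFileMetadata class may differ:

```python
from dataclasses import dataclass

def categorize(line_count: int) -> str:
    # Thresholds follow the categories above: Simple, Medium, Complex
    if line_count <= 300:
        return "Simple"
    if line_count <= 700:
        return "Medium"
    return "Complex"

@dataclass
class ImplementationFileMetadata:
    path: str
    line_count: int
    file_type: str  # "c", "header", or "dbio"

    @property
    def complexity(self) -> str:
        return categorize(self.line_count)

meta = ImplementationFileMetadata("src/billing.c", 850, "c")
print(meta.complexity)  # Complex
```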

The solution consists of the following key orchestration features:

  • Session persistence – Each conversion maintains session state across agent interactions.
  • Error recovery – Comprehensive error handling with graceful degradation.
  • Performance tracking – Built-in metrics for processing time, iteration counts, and success rates.
  • Token continuation – Seamless handling of large files through response stitching.

This framework-specific implementation facilitates reliable, scalable code conversion while maintaining the flexibility to handle diverse C code base structures and complexities.

Prerequisites

Before implementing this code conversion solution, make sure you have the following components configured:

  • AWS environment:
    • AWS account with appropriate permissions for Amazon Bedrock with Amazon Nova Premier model access
    • Amazon Elastic Compute Cloud (Amazon EC2) instance (t3.medium or larger) for development and testing, or a local development environment
  • Development setup:
    • Python 3.10+ installed with Boto3 SDK and Strands Agents
    • AWS Command Line Interface (AWS CLI) configured with appropriate credentials and AWS Region
    • Git for version control of legacy code base and converted code
    • Text editor or integrated development environment (IDE) capable of handling both C and Java code bases
  • Source and target code base requirements:
    • C source code organized in a structured directory format
    • Java 11+ and Maven/Gradle build tools
    • Spring Framework 5.x or Spring Boot 2.x+ dependencies

The source code and prompts used in the post can be found in the GitHub repo.

Agent-based conversion process

The solution uses a sophisticated multi-agent system implemented using the Strands framework, where each agent specializes in a specific aspect of the code conversion process. This distributed approach provides thorough analysis, accurate conversion, and comprehensive validation while maintaining the flexibility to handle diverse code structures and complexities.

Strands framework integration

Each agent extends the BaseStrandsConversionAgent class, which provides a hybrid architecture combining Strands session management with custom BedrockInference capabilities:

from abc import ABC, abstractmethod
from typing import Any, Dict

from strands import Agent

class BaseStrandsConversionAgent(ABC):
    def __init__(self, name: str, bedrock_inference, system_prompt: str):
        self.name = name
        self.bedrock = bedrock_inference  # Custom BedrockInference for token handling
        self.system_prompt = system_prompt
        
        # Create strands agent for session management
        self.strands_agent = Agent(name=name, system_prompt=system_prompt)
    
    @abstractmethod
    async def execute_async(self, context: ConversionContext) -> Dict[str, Any]:
        # Implemented by each specialized agent
        ...
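
A concrete agent subclasses the base and implements execute_async. The following self-contained sketch stubs out the Strands Agent and ConversionContext types purely for illustration; a real agent would call Bedrock through self.bedrock:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict

# Stubs standing in for the real Strands Agent and ConversionContext types
class Agent:
    def __init__(self, name: str, system_prompt: str):
        self.name, self.system_prompt = name, system_prompt

class ConversionContext:
    def __init__(self, c_code: str):
        self.c_code = c_code

class BaseStrandsConversionAgent(ABC):
    def __init__(self, name: str, bedrock_inference, system_prompt: str):
        self.name = name
        self.bedrock = bedrock_inference
        self.strands_agent = Agent(name=name, system_prompt=system_prompt)

    @abstractmethod
    async def execute_async(self, context: ConversionContext) -> Dict[str, Any]: ...

class CodeAnalysisAgent(BaseStrandsConversionAgent):
    async def execute_async(self, context: ConversionContext) -> Dict[str, Any]:
        # A real agent would run inference here; we return summary metadata
        return {"agent": self.name, "lines": len(context.c_code.splitlines())}

agent = CodeAnalysisAgent("code-analysis", bedrock_inference=None, system_prompt="...")
result = asyncio.run(agent.execute_async(ConversionContext("int main() {\n    return 0;\n}")))
print(result)  # {'agent': 'code-analysis', 'lines': 3}
```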

Code analysis agent

The code analysis agent examines the structure of the C code base, identifying dependencies between files and determining the optimal conversion strategy. This agent helps prioritize which files to convert first and identifies potential challenges. The following is the prompt template for the code analysis agent:

You are a Code Analysis Agent with expertise in legacy C codebases and modern Java/Spring architecture.

<c_codebase>
{c_code}
</c_codebase>

## TASK
Your task is to analyze the provided C code to prepare for migration.

Perform a comprehensive analysis and provide the following:
## INSTRUCTIONS

1. DEPENDENCY ANALYSIS:
   - Identify all file dependencies (which files include or reference others)
   - Map function calls between files
   - Detect shared data structures and global variables

2. COMPLEXITY ASSESSMENT:
   - Categorize each file as Simple (0-300 lines), Medium (300-700 lines), or Complex (700+ lines)
   - Identify files with complex control flow, pointer manipulation, or memory management
   - Flag any platform-specific or hardware-dependent code

3. CONVERSION PLANNING:
   - Recommend a conversion sequence (which files to convert first)
   - Suggest logical splitting points for large files
   - Identify common patterns that can be standardized during conversion

4. RISK ASSESSMENT:
   - Highlight potential conversion challenges (e.g., pointer arithmetic, bitwise operations)
   - Identify business-critical sections requiring special attention
   - Note any undocumented assumptions or behaviors

5. ARCHITECTURE RECOMMENDATIONS:
   - Suggest appropriate Java/Spring components for each C module
   - Recommend DTO structure and service organization
   - Propose database access strategy using a persistence framework

Format your response as a structured JSON document with these sections.
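
As a sketch, the analysis prompt can be packaged into a Converse API request as follows. The helper name is ours; the parameter names follow the bedrock-runtime converse() operation, and the inference settings are illustrative defaults:

```python
def build_converse_request(system_prompt: str, user_prompt: str,
                           model_id: str = "us.amazon.nova-premier-v1:0") -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": user_prompt}]}],
        "inferenceConfig": {"maxTokens": 8192, "temperature": 0.2},
    }

req = build_converse_request(
    "You are a Code Analysis Agent...",
    "<c_codebase>...</c_codebase>",
)
# A live call would then be:
#   boto3.client("bedrock-runtime").converse(**req)
```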

Conversion agent

The conversion agent handles the actual transformation of C code to Java/Spring code. This agent is assigned the role of a senior software developer with expertise in both C and Java/Spring frameworks. The prompt template for the conversion agent is as follows:

You are a Senior Software Developer with 15+ years of experience in both C and Java Spring framework. 

<c_file>
{c_code}
</c_file>

## TASK
Your task is to convert legacy C code to modern Java Spring code with precision and completeness.

## CONVERSION GUIDELINES:

1. CODE STRUCTURE:
   - Create appropriate Java classes (Service, DTO, Mapper interfaces)
   - Preserve original function and variable names unless they conflict with Java conventions
   - Use Spring annotations appropriately (@Service, @Repository, etc.)
   - Implement proper package structure based on functionality

2. JAVA BEST PRACTICES:
   - Use Lombok annotations (@Data, @Slf4j, @RequiredArgsConstructor) to reduce boilerplate
   - Implement proper exception handling instead of error codes
   - Replace pointer operations with appropriate Java constructs
   - Convert C-style arrays to Java collections where appropriate

3. SPRING FRAMEWORK INTEGRATION:
   - Use dependency injection instead of global variables
   - Implement persistence framework mappers for database operations
   - Replace direct SQL calls with mapper interfaces
   - Use Spring's transaction management

4. SPECIFIC TRANSFORMATIONS:
   - Replace PFM_TRY/PFM_CATCH with Java try-catch blocks
   - Convert mpfmdbio calls to persistence framework mapper method calls
   - Replace mpfm_dlcall with appropriate Service bean injections
   - Convert NGMHEADER references to input.getHeaderVo() calls
   - Replace PRINT_ and PFM_DBG macros with SLF4J logging
   - Convert ngmf_ methods to CommonAPI.ngmf method calls

5. DATA HANDLING:
   - Create separate DTO classes for input and output structures
   - Use proper Java data types (String instead of char arrays, etc.)
   - Implement proper null handling and validation
   - Remove manual memory management code

## OUTPUT FORMAT:
   - Include filename at the top of each Java file: #filename: [filename].java
   - Place executable Java code inside <java></java> tags
   - Organize multiple output files clearly with proper headers

Generate complete, production-ready Java code that fully implements all functionality from the original C code.
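
Because the conversion agent emits one or more files using the #filename header and <java> tags, the orchestrator needs to parse that output format. The following sketch (helper name ours, assuming the header appears inside each tag pair) shows one way:

```python
import re

def parse_java_outputs(text: str) -> list[tuple[str, str]]:
    """Extract (filename, code) pairs from <java>...</java> blocks,
    reading the '#filename:' header inside each block."""
    files = []
    for block in re.findall(r"<java>(.*?)</java>", text, re.DOTALL):
        match = re.search(r"#filename:\s*(\S+\.java)", block)
        name = match.group(1) if match else "Unknown.java"
        files.append((name, block.strip()))
    return files

sample = "<java>#filename: BillingService.java\nclass BillingService {}</java>"
print(parse_java_outputs(sample)[0][0])  # BillingService.java
```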

Security assessment agent

The security assessment agent performs comprehensive vulnerability analysis on the original C code and the converted Java code, identifying potential security risks and providing specific mitigation strategies. This agent is crucial for making sure security vulnerabilities are not carried forward during migration and that new code follows security best practices. The following is the prompt template for the security assessment agent:

You are a Security Assessment Agent with expertise in identifying vulnerabilities in both C and Java codebases, specializing in secure code migration practices.

ORIGINAL C CODE:
<c_code>
{c_code}
</c_code>

CONVERTED JAVA CODE:
<java_code>
{java_code}
</java_code>

## TASK
Your task is to perform comprehensive security analysis on both the legacy C code and converted Java code, identifying vulnerabilities and providing specific mitigation recommendations.

## SECURITY ANALYSIS FRAMEWORK

1. **LEGACY C CODE VULNERABILITIES:**
   - Buffer overflow risks (strcpy, strcat, sprintf usage)
   - Memory management issues (dangling pointers, memory leaks)
   - Integer overflow/underflow vulnerabilities
   - Format string vulnerabilities
   - Race conditions in multi-threaded code
   - Improper input validation and sanitization
   - SQL injection risks in database operations
   - Insecure cryptographic implementations

2. **JAVA CODE SECURITY ASSESSMENT:**
   - Input validation and sanitization gaps
   - SQL injection vulnerabilities in persistence framework queries
   - Improper exception handling that leaks sensitive information
   - Authentication and authorization bypass risks
   - Insecure deserialization vulnerabilities
   - Cross-site scripting (XSS) prevention in web endpoints
   - Logging of sensitive data
   - Dependency vulnerabilities in Spring framework usage

3. **MIGRATION-SPECIFIC RISKS:**
   - Security assumptions that don't translate between languages
   - Privilege escalation through improper Spring Security configuration
   - Data exposure through overly permissive REST endpoints
   - Session management vulnerabilities
   - Configuration security (hardcoded credentials, insecure defaults)

4. **COMPLIANCE AND BEST PRACTICES:**
   - OWASP Top 10 compliance assessment
   - Spring Security best practices implementation
   - Secure coding standards adherence
   - Data protection and privacy considerations

## OUTPUT FORMAT
Provide your analysis as a structured JSON with these fields:
- "critical_vulnerabilities": array of critical security issues requiring immediate attention
- "security_risk_issues": array of security concerns
- "secure_code_recommendations": specific code changes to implement security fixes
- "spring_security_configurations": recommended Spring Security configurations
- "compliance_gaps": areas where code doesn't meet security standards
- "migration_security_notes": security considerations specific to the C-to-Java migration

For each vulnerability, include:
- Description of the security risk
- Potential impact and attack vectors
- Specific line numbers or code sections affected
- Detailed remediation steps with code examples
- Priority level and recommended timeline for fixes

Be thorough in identifying both obvious and subtle security issues that could be exploited in production environments.
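
Because the agent returns structured JSON, the orchestrator can gate on its findings programmatically. A minimal sketch, with field names taken from the output format above and a helper name of our own:

```python
import json

def has_blocking_vulnerabilities(report_text: str) -> bool:
    """Return True when the security report lists any critical issues."""
    report = json.loads(report_text)
    return bool(report.get("critical_vulnerabilities"))

report = json.dumps({
    "critical_vulnerabilities": [{"issue": "SQL injection in OrderMapper"}],
    "security_risk_issues": [],
})
print(has_blocking_vulnerabilities(report))  # True
```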

Validation agent

The validation agent reviews the converted code to identify missing or incorrectly converted components. This agent provides detailed feedback that is used in subsequent conversion iterations. The prompt template for the validation agent is as follows:

You are a Code Validation Agent specializing in verifying C to Java/Spring migrations.

ORIGINAL C CODE:
<c_code>
{c_code}
</c_code>

CONVERTED JAVA CODE:
<java_code>
{java_code}
</java_code>

## TASK
Your task is to thoroughly analyze the conversion quality and identify any issues or omissions.
Perform a comprehensive validation focusing on these aspects:

## INSTRUCTIONS
1. COMPLETENESS CHECK:
   - Verify all functions from C code are implemented in Java
   - Confirm all variables and data structures are properly converted
   - Check that all logical branches and conditions are preserved
   - Ensure all error handling paths are implemented

2. CORRECTNESS ASSESSMENT:
   - Identify any logical errors in the conversion
   - Verify proper transformation of C-specific constructs (pointers, structs, etc.)
   - Check for correct implementation of memory management patterns
   - Validate proper handling of string operations and byte manipulation

3. SPRING FRAMEWORK COMPLIANCE:
   - Verify appropriate use of Spring annotations and patterns
   - Check proper implementation of dependency injection
   - Validate correct use of persistence framework mappers
   - Ensure proper service structure and organization

4. CODE QUALITY EVALUATION:
   - Assess Java code quality and adherence to best practices
   - Check for proper exception handling
   - Verify appropriate logging implementation
   - Evaluate overall code organization and readability

## OUTPUT FORMAT
Provide your analysis as a structured JSON with these fields:
- "complete": boolean indicating if conversion is complete
- "missing_elements": array of specific functions, variables, or logic blocks that are missing
- "incorrect_transformations": array of elements that were incorrectly transformed
- "spring_framework_issues": array of Spring-specific implementation issues
- "quality_concerns": array of code quality issues
- "recommendations": specific, actionable recommendations for improvement

Be thorough and precise in your analysis, as your feedback will directly inform the next iteration of the conversion process.

Feedback loop implementation with refine agent

The feedback loop is a critical component that enables iterative refinement of the converted code. This process involves the following steps:

  1. Initial conversion by the conversion agent.
  2. Security assessment by the security assessment agent.
  3. Validation by the validation agent.
  4. Feedback incorporation by the refine agent (incorporating both validation and security feedback).
  5. Repeat until satisfactory results are achieved.
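
The loop above can be sketched in Python with the agent calls stubbed out; only the iteration control mirrors the described workflow (the "complete" flag comes from the validation agent's JSON output):

```python
MAX_ITERATIONS = 5

def run_feedback_loop(convert, assess_security, validate, refine):
    """Drive the convert -> assess -> validate -> refine cycle,
    stopping early once validation reports completion."""
    java_code = convert()
    for iteration in range(1, MAX_ITERATIONS + 1):
        security = assess_security(java_code)
        result = validate(java_code)
        if result["complete"]:  # early termination on satisfactory results
            return java_code, iteration
        java_code = refine(java_code, result, security)
    return java_code, MAX_ITERATIONS

# Toy stubs: validation succeeds on the third pass
calls = {"n": 0}
def validate_stub(code):
    calls["n"] += 1
    return {"complete": calls["n"] >= 3}

code, iters = run_feedback_loop(
    convert=lambda: "v1",
    assess_security=lambda c: {},
    validate=validate_stub,
    refine=lambda c, v, s: c + "+",
)
print(iters)  # 3
```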

The refine agent incorporates security vulnerability fixes alongside functional improvements, and security assessment results are provided to development teams for final review and approval before production deployment. The following code is the prompt template for code refinement:

You are a Senior Software Developer specializing in C to Java/Spring migration with expertise in secure coding practices.

ORIGINAL C CODE:
<c_code>
{c_code}
</c_code>

YOUR PREVIOUS JAVA CONVERSION:
<previous_java>
{previous_java_code}
</previous_java>

VALIDATION FEEDBACK:
<validation_feedback>
{validation_feedback}
</validation_feedback>

SECURITY ASSESSMENT:
<security_feedback>
{security_feedback}
</security_feedback>

## TASK
You've previously converted C code to Java, but validation and security assessment have identified issues that need to be addressed. Your task is to improve the conversion by addressing all identified functional and security issues while maintaining complete functionality.

## INSTRUCTIONS
1. ADDRESSING MISSING ELEMENTS:
   - Implement any functions, variables, or logic blocks identified as missing
   - Ensure all control flow paths from the original code are preserved
   - Add any missing error handling or edge cases

2. CORRECTING TRANSFORMATIONS:
   - Fix any incorrectly transformed code constructs
   - Correct any logical errors in the conversion
   - Properly implement C-specific patterns in Java

3. IMPLEMENTING SECURITY FIXES:
   - Address all critical and high-risk security vulnerabilities identified
   - Implement secure coding practices (input validation, parameterized queries, etc.)
   - Replace insecure patterns with secure Java/Spring alternatives
   - Add proper exception handling that does not leak sensitive information

4. IMPROVING SPRING IMPLEMENTATION:
   - Correct any issues with Spring annotations or patterns
   - Ensure proper dependency injection and service structure
   - Fix persistence framework mapper implementations if needed
   - Implement Spring Security configurations as recommended

5. MAINTAINING CONSISTENCY:
   - Ensure naming conventions are consistent throughout the code
   - Maintain consistent patterns for similar operations
   - Preserve the structure of the original code where appropriate

## OUTPUT FORMAT
Output the improved Java code inside <java></java> tags, with appropriate file headers. Ensure all security vulnerabilities are addressed while maintaining complete functionality from the original C code.

Integration agent

The integration agent combines individually converted Java files into a cohesive application, resolving inconsistencies in variable naming and providing proper dependencies. The prompt template for the integration agent is as follows:

You are an Integration Agent specializing in combining individually converted Java files into a cohesive Spring application. 

CONVERTED JAVA FILES:
<converted_files>
{converted_java_files}
</converted_files>

ORIGINAL FILE RELATIONSHIPS:
<relationships>
{file_relationships}
</relationships>

## TASK
Your task is to integrate multiple Java files that were converted from C, ensuring they work together properly.

Perform the following integration tasks:
## INSTRUCTIONS
1. DEPENDENCY RESOLUTION:
   - Identify and resolve dependencies between services and components
   - Ensure proper autowiring and dependency injection
   - Verify that service method signatures match their usage across files

2. NAMING CONSISTENCY:
   - Standardize variable and method names that should be consistent across files
   - Resolve any naming conflicts or inconsistencies
   - Ensure DTO field names match across related classes

3. PACKAGE ORGANIZATION:
   - Organize classes into appropriate package structure
   - Group related functionality together
   - Ensure proper import statements across all files

4. SERVICE COMPOSITION:
   - Implement proper service composition patterns
   - Ensure services interact correctly with each other
   - Verify that data flows correctly between components

5. COMMON COMPONENTS:
   - Extract and standardize common utility functions
   - Ensure consistent error handling across services
   - Standardize logging patterns

6. CONFIGURATION:
   - Create necessary Spring configuration classes
   - Set up appropriate bean definitions
   - Configure any required properties or settings

Output the integrated Java code as a set of properly organized files, each with:
- Appropriate package declarations
- Correct import statements
- Proper Spring annotations
- Clear file headers (#filename: [filename].java)

Place each file's code inside <java></java> tags. Ensure the integrated application maintains all functionality from the individual components while providing a cohesive structure.

DBIO conversion agent

This specialized agent handles the conversion of SQL DBIO C source code to XML mapper files compatible with a persistence framework in the Java Spring framework. The following is the prompt template for the DBIO conversion agent:

You are a Database Integration Specialist with expertise in converting C-based SQL DBIO code to persistence framework XML mappings for Spring applications. 

SQL DBIO C SOURCE CODE:
<sql_dbio>
{sql_dbio_code}
</sql_dbio>

## TASK
Your task is to transform the provided SQL DBIO C code into properly structured persistence framework XML files.

Perform the conversion following these guidelines:

## INSTRUCTIONS
1. XML STRUCTURE:
   - Create a properly formatted persistence framework mapper XML file
   - Include appropriate namespace matching the Java mapper interface
   - Set correct resultType or resultMap attributes for queries
   - Use proper persistence framework XML structure and syntax

2. SQL TRANSFORMATION:
   - Preserve the exact SQL logic from the original code
   - Convert any C-specific SQL parameter handling to persistence framework parameter markers
   - Maintain all WHERE clauses, JOIN conditions, and other SQL logic
   - Preserve any comments explaining SQL functionality

3. PARAMETER HANDLING:
   - Convert C variable bindings to persistence framework parameter references (#{param})
   - Handle complex parameters using appropriate persistence framework techniques
   - Ensure parameter types match Java equivalents (String instead of char[], etc.)

4. RESULT MAPPING:
   - Create appropriate resultMap elements for complex result structures
   - Map column names to Java DTO property names
   - Handle any type conversions needed between database and Java types

5. DYNAMIC SQL:
   - Convert any conditional SQL generation to persistence framework dynamic SQL elements
   - Use <if>, <choose>, <where>, and other dynamic elements as appropriate
   - Maintain the same conditional logic as the original code

6. ORGANIZATION:
   - Group related queries together
   - Include clear comments explaining the purpose of each query
   - Follow persistence framework best practices for mapper organization

## OUTPUT FORMAT
Output the converted persistence framework XML inside <xml></xml> tags. Include a filename comment at the top: #filename: [EntityName]Mapper.xml

Ensure the XML is well-formed, properly indented, and follows persistence framework conventions for Spring applications.

Handling token limitations

To address token limitations in the Amazon Bedrock Converse API, we implemented a text prefilling technique that allows the model to continue generating code where it left off. This approach is particularly crucial for large files that exceed the model’s context window and represents a key technical innovation in our Strands-based implementation.

Technical implementation

The following code implements the BedrockInference class with continuation support:

import logging
from typing import List

import boto3
from botocore.config import Config

logger = logging.getLogger(__name__)

class BedrockInference:
    def __init__(self, region_name: str = "us-east-1", model_id: str = "us.amazon.nova-premier-v1:0"):
        self.config = Config(read_timeout=300)
        self.client = boto3.client("bedrock-runtime", config=self.config, region_name=region_name)
        self.model_id = model_id
        self.continue_prompt = {
            "role": "user",
            "content": [{"text": "Continue the code conversion from where you left off."}]
        }
    
    def run_converse_inference_with_continuation(self, prompt: str, system_prompt: str) -> List[str]:
        """Run inference with continuation handling for large outputs"""
        ans_list = []
        messages = [{"role": "user", "content": [{"text": prompt}]}]
        
        response, stop = self.generate_conversation([{'text': system_prompt}], messages)
        ans = response['output']['message']['content'][0]['text']
        ans_list.append(ans)
        
        while stop == "max_tokens":
            logger.info("Response truncated, continuing generation...")
            messages.append(response['output']['message'])
            messages.append(self.continue_prompt)
            
            # Extract last few lines for continuation context
            sec_last_line = '\n'.join(ans.rsplit('\n', 3)[1:-1]).strip()
            messages.append({"role": "assistant", "content": [{"text": sec_last_line}]})
            
            response, stop = self.generate_conversation([{'text': system_prompt}], messages)
            ans = response['output']['message']['content'][0]['text']
            del messages[-1]  # Remove the prefill message
            ans_list.append(ans)
        
        return ans_list

Continuation strategy details

The continuation strategy consists of the following steps:

  1. Response monitoring:
    1. The system monitors the stopReason field in Amazon Bedrock responses.
    2. When stopReason equals max_tokens, continuation is triggered automatically. This makes sure no code generation is lost due to token limitations.
  2. Context preservation:
    1. The system extracts the last few lines of generated code as continuation context.
    2. It uses text prefilling to maintain code structure and formatting. It preserves variable names, function signatures, and code patterns across continuations.
  3. Response stitching:
def stitch_output(self, prompt: str, system_prompt: str, tag: str = "java") -> str:
    """Stitch together multiple responses and extract content within specified tags"""
    ans_list = self.run_converse_inference_with_continuation(prompt, system_prompt)
    
    if len(ans_list) == 1:
        final_ans = ans_list[0]
    else:
        final_ans = ans_list[0]
        for i in range(1, len(ans_list)):
            # Drop the possibly truncated last line before appending the continuation
            final_ans = final_ans.rsplit('\n', 1)[0] + ans_list[i]
    
    # Extract content within specified tags (java, xml, etc.)
    if f'<{tag}>' in final_ans and f'</{tag}>' in final_ans:
        final_ans = final_ans.split(f'<{tag}>')[-1].split(f'</{tag}>')[0].strip()
    
    return final_ans
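To see the stitching and tag extraction in isolation, here is a standalone sketch (no Bedrock calls; the fragments are hand-made stand-ins for model responses):

```python
def stitch_fragments(fragments, tag="java"):
    """Combine continuation fragments and extract the tagged payload.

    Mirrors the stitching logic above: the last line of the accumulated
    text (possibly truncated mid-line) is dropped before each continuation
    is appended, then the text between <tag>...</tag> is returned.
    """
    final_ans = fragments[0]
    for frag in fragments[1:]:
        final_ans = final_ans.rsplit('\n', 1)[0] + frag
    open_tag, close_tag = f'<{tag}>', f'</{tag}>'
    if open_tag in final_ans and close_tag in final_ans:
        final_ans = final_ans.split(open_tag)[-1].split(close_tag)[0].strip()
    return final_ans


# First fragment ends mid-line; the continuation re-emits that line whole.
parts = [
    "<java>\npublic class Account {\n    private BigDec",
    "\n    private BigDecimal balance;\n}\n</java>",
]
print(stitch_fragments(parts))
```

The overlap removal assumes each continuation restarts from the truncated line, which the prefill step encourages.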

Optimizing conversion quality

Through our experiments, we identified several factors that significantly impact conversion quality:

  • File size management – Files with more than 300 lines of code benefit from being broken into smaller logical units before conversion.
  • Focused conversion – Converting different file types (C, header, DBIO) separately yields better results because each file type has distinct conversion patterns. During conversion, C functions are transformed into Java methods within classes, and C structs become Java classes. However, because files are converted individually without cross-file context, achieving optimal object-oriented design might require human intervention to consolidate related functionality, establish proper class hierarchies, and enforce appropriate encapsulation across the converted code base.
  • Iterative refinement – Multiple feedback loops (4–5 iterations) produce more comprehensive conversions.
  • Role assignment – Assigning the model a specific role (senior software developer) improves output quality.
  • Detailed instructions – Providing specific transformation rules for common patterns improves consistency.
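As a concrete illustration of the file size management point, a preprocessing step might split large C files at top-level boundaries so each chunk stays under the 300-line threshold. The following naive brace-counting splitter is a sketch of that idea, not part of the original workflow; a real code base would warrant a proper C parser:

```python
def split_c_source(source: str, max_lines: int = 300) -> list:
    """Split a C file into chunks of roughly max_lines, cutting only
    where brace depth returns to zero so whole functions and structs
    stay together. Naive: ignores braces in strings and comments."""
    chunks, current, depth = [], [], 0
    for line in source.splitlines():
        current.append(line)
        depth += line.count('{') - line.count('}')
        # Only cut at top level, once the chunk is large enough
        if depth == 0 and len(current) >= max_lines:
            chunks.append('\n'.join(current))
            current = []
    if current:
        chunks.append('\n'.join(current))
    return chunks
```

Each chunk can then be converted independently, with cross-chunk dependencies reviewed afterward.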

Assumptions

This migration strategy makes the following key assumptions:

  • Code quality – Legacy C code follows reasonable coding practices with discernible structure. Obfuscated or poorly structured code might require preprocessing before automated conversion.
  • Scope limitations – This approach targets business logic conversion rather than low-level system code. C code with hardware interactions or platform-specific features might require manual intervention.
  • Test coverage – Comprehensive test cases exist for the legacy application to validate functional equivalence after migration. Without adequate tests, additional validation steps are necessary.
  • Domain knowledge – Although the agentic workflow reduces the need for expertise in both C and Java, access to subject matter experts who understand the business domain is required to validate preservation of critical business logic.
  • Phased migration – The approach assumes an incremental migration strategy is acceptable, where components can be converted and validated individually rather than requiring a full project-level migration.

Results and performance

To evaluate the effectiveness of our migration approach powered by Amazon Nova Premier, we measured performance across enterprise-grade code bases representing typical customer scenarios. Our assessment focused on two success factors: structural completeness (preservation of all business logic and functions) and framework compliance (adherence to Spring Boot best practices and conventions).

Migration accuracy by code base complexity

The agentic workflow demonstrated varying effectiveness based on file complexity, with all results validated by subject matter experts. The following table summarizes the results.

File Size Category | Structural Completeness | Framework Compliance | Average Processing Time
Small (0–300 lines) | 93% | 100% | 30–40 seconds
Medium (300–700 lines) | 81%* | 91%* | 7 minutes
Large (more than 700 lines) | 62%* | 84%* | 21 minutes

*After multiple feedback cycles

Key insights for enterprise adoption

These results reveal an important pattern: the agentic approach excels at handling the bulk of migration work (small to medium files) while still providing significant value for complex files that require human oversight. This creates a hybrid approach where AI handles routine conversions and security assessments, and developers focus on integration and architectural decisions.

Conclusion

Our solution demonstrates that the Amazon Bedrock Converse API with Amazon Nova Premier, when implemented within an agentic workflow, can effectively convert legacy C code to modern Java/Spring framework code. The approach handles complex code structures, manages token limitations, and produces high-quality conversions with minimal human intervention. The solution breaks down the conversion process into specialized agent roles, implements robust feedback loops, and handles token limitations through continuation techniques. This approach accelerates the migration process, improves code quality, and reduces the potential for errors. Try out the solution for your own use case, and share your feedback and questions in the comments.


About the authors

Aditya Prakash is a Senior Data Scientist at the Amazon Generative AI Innovation Center. He helps customers leverage AWS AI/ML services to solve business challenges through generative AI solutions. Specializing in code transformation, RAG systems, and multimodal applications, Aditya enables organizations to implement practical AI solutions across diverse industries.

Jihye Seo is a Senior Deep Learning Architect who specializes in designing and implementing generative AI solutions. Her expertise spans model optimization, distributed training, RAG systems, AI agent development, and real-time data pipeline construction across manufacturing, healthcare, gaming, and e-commerce sectors. As an AI/ML consultant, Jihye has delivered production-ready solutions for clients, including smart factory control systems, predictive maintenance platforms, demand forecasting models, recommendation engines, and MLOps frameworks.

Yash Shah is a Science Manager in the AWS Generative AI Innovation Center. He and his team of applied scientists, architects, and engineers work on a range of machine learning use cases across healthcare, sports, automotive, and manufacturing, helping customers realize the art of the possible with generative AI. Yash is a graduate of Purdue University, specializing in human factors and statistics. Outside of work, Yash enjoys photography, hiking, and cooking.