AWS Public Sector Blog
Generative AI in EdTech: 5 pitfalls to avoid for long-term success
Today’s education technology (EdTech) leaders face intense pressure to “do something with AI.” Whether that pressure comes from boards, investors, or Wall Street, the message is clear: develop an AI strategy now. Fear of missing out has created a mentality where companies rush out AI features without understanding what problems they’re solving for their users.
At Amazon Web Services (AWS), we work closely with EdTech leaders navigating the generative AI landscape, and we’ve seen these patterns emerge repeatedly. The initial rush was understandable; there was a feeling that whoever moved first would win at AI. But it’s important to note that, in the words of Amazon CEO Andy Jassy, “we’re still at the relative beginning.” There is still time to breathe, and there is also time to move deliberately and decisively, starting now. The best generative AI strategies aren’t reactive; they’re purposeful, grounded, and user-centric.
Based on our experience collaborating with EdTech organizations across the education spectrum, we’ve created a list of five common pitfalls that derail generative AI efforts and the strategic approaches we recommend to avoid them.
Pitfall 1: Building a solution in search of a problem
As mentioned earlier, teams often quickly build generative AI features just to say they have them. Instead of working backwards from customer needs, companies work forwards from the technology. This mindset results in products like chatbots that are forced into user workflows. Companies achieve a short-term win by launching an AI feature, but they don’t see the user adoption they were hoping for.
Avoid this pitfall by starting with customers, whether they’re external buyers or internal stakeholders who could benefit from AI-powered assistance, and working backwards from their pain points. Broaden your definition of “customer” to include internal groups like HR, curriculum development, or operations teams. For example, if your company is spending significant time manually adapting science curriculum for different state standards, that’s a measurable problem AI might solve.
Break down silos between product, engineering, sales, and marketing to define aligned use cases. Move past recency bias (the tendency to overemphasize the most recent trends or ideas) and toward systematic prioritization of use cases based on the voice of the customer and real ROI drivers.
Pitfall 2: Building the top floor first
Teams jump into generative AI without investing in the foundation: data infrastructure. Differentiation with generative AI models depends heavily on unique data. The fundamentals of an AI data strategy are identifying the data you need, ensuring the models can use that data, and implementing governance around quality, security, and access so the right people have the right data when they need it. Often, data cleanup is treated as technical debt, less appealing to fund than flashy AI tools. In our experience advising EdTech companies, this foundational work is critical for long-term success.
Avoid this pitfall by defining your data sources and how you’ll ingest, clean, and store data for usability. Confirm data quality through robust analysis; one university’s AI tool broke because one database used “terms” while another used “semesters.” Establish data governance policies around access, privacy, and compliance. Build modern data architecture as part of your AI strategy, allowing you to advance both at the same time.
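To make the “terms” versus “semesters” lesson concrete, here’s a minimal sketch of the kind of normalization check that catches naming mismatches before they break downstream tools. The table names, column names, and mappings below are hypothetical placeholders, not from any specific customer system.

```python
import pandas as pd

# Hypothetical extracts from two source systems; names and values are
# illustrative only.
registrar = pd.DataFrame({"student_id": [1, 2], "term": ["Fall 2024", "Spring 2025"]})
lms = pd.DataFrame({"student_id": [1, 2], "semester": ["FA24", "SP25"]})

# Map each system's naming onto one canonical vocabulary before joining.
CANONICAL = {
    "Fall 2024": "2024-FA", "FA24": "2024-FA",
    "Spring 2025": "2025-SP", "SP25": "2025-SP",
}

registrar["period"] = registrar["term"].map(CANONICAL)
lms["period"] = lms["semester"].map(CANONICAL)

# Fail loudly on values the mapping doesn't cover, rather than silently
# producing empty joins downstream.
for df, source in [(registrar, "registrar"), (lms, "lms")]:
    unmapped = df.loc[df["period"].isna()]
    if not unmapped.empty:
        raise ValueError(f"Unmapped period values in {source}: {unmapped.to_dict('records')}")

merged = registrar.merge(lms, on=["student_id", "period"])
```

Checks like this belong in your ingestion pipeline, not in the AI feature itself, so every model and tool built on the data inherits the fix.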
Pitfall 3: No way to measure “good”
Eager to see what a proof of concept can do, teams skip defining evaluation metrics up front and then get stuck debating whether solutions are “good enough.” If the model output is accurate 80 percent of the time, is that good enough for production? It depends.
Consider two real examples from our EdTech partners: One customer uses generative AI to create illustrated books from stories written by third graders. They are less concerned about the precise accuracy of dragon illustrations (e.g., whether a dragon has three arms instead of two), but they care deeply about ensuring no inappropriate content appears (e.g., no profanity or depictions of smoking). Another customer uses generative AI to grade high-stakes assessments for secondary students. Here, accuracy becomes paramount.
Avoid this pitfall by defining metrics up front based on delivering value, preventing harm, and protecting your brand. Then choose evaluation methods suited to those metrics, such as expert review, user feedback, or AI-as-a-judge approaches.
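As one illustration of the AI-as-a-judge approach, here’s a minimal sketch that scores a candidate output against a rubric using the Amazon Bedrock Converse API via boto3. The rubric, model ID, scoring scale, and thresholds are all assumptions to be replaced with the metrics you define for your own use case.

```python
import json
import boto3

# Assumes AWS credentials and a region with Bedrock model access are configured.
bedrock = boto3.client("bedrock-runtime")

# Hypothetical rubric tied to metrics defined up front: deliver value,
# prevent harm, protect the brand.
RUBRIC = (
    "You are evaluating an AI-generated study summary for grade 9 students. "
    "Return only JSON with integer fields 'accuracy' and 'safety', each 1-5, "
    "and a short 'rationale'."
)

def judge(candidate_output: str) -> dict:
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID; choose your own
        system=[{"text": RUBRIC}],
        messages=[{"role": "user", "content": [{"text": candidate_output}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0},
    )
    # A production version would validate and retry on malformed JSON.
    return json.loads(response["output"]["message"]["content"][0]["text"])

# Gate promotion to production on thresholds your team agreed to in advance.
scores = judge("Photosynthesis converts sunlight, water, and CO2 into glucose and oxygen.")
assert scores["safety"] >= 5 and scores["accuracy"] >= 4
```

An automated judge like this complements expert review rather than replacing it; sampling judged outputs for human spot checks keeps the rubric honest.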
When teams skip this step, promising prototypes sit on the shelf while stakeholders argue over subjective quality measures. Without an objective way to assess outputs, amorphous debate stalls progress. In our work, we’ve seen customers with promising tools that can’t move ahead because they never defined what “accurate enough” means for their specific use case.
Pitfall 4: Not planning for success
You build something that works, perhaps a tool to improve internal efficiency such as curriculum alignment automation or student progress report generation, but you never planned to scale it. Suddenly, you face unexpected costs, resource needs, or integration challenges. This pitfall is more tactical than the others, but equally important. Teams often focus so intensely on proving the concept works that they forget to plan for what happens when it does.
Avoid this pitfall by developing go-to-market plans early: Will this be monetized? Is it more of a customer retention play? Is the solution designed to improve internal productivity or efficiency, and if so, how will this be measured? Understanding how the solution fits into a sustainable business model helps you make informed decisions about pricing, customer access, and resource allocation. Create roadmaps for scaling infrastructure and operational support. Consider whether you have the engineering resources to maintain the solution or if it needs to pay for itself through efficiency gains.
When tracking success of internal efficiency tools, start by assigning measurable values to manual processes. If you’re automating curriculum adaptation for 50 different state standards, for example, calculate what that manual work currently costs in staff time and opportunity cost. By measuring the baseline inefficiency first, you create a clear before-and-after picture that demonstrates the impact of your AI solution. For internal tools, remember that staff freed up from manual tasks can shift to higher-value, mission-focused activities that deliver greater returns for your company.
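As a back-of-the-envelope illustration, the baseline calculation can be as simple as the sketch below; every figure in it is a hypothetical placeholder to be replaced with your own numbers.

```python
# Hypothetical baseline: manually adapting one curriculum to 50 state standards.
states = 50
hours_per_state = 12        # staff hours to adapt for one state (assumed)
loaded_hourly_rate = 65     # salary plus benefits per hour, USD (assumed)

baseline_cost = states * hours_per_state * loaded_hourly_rate  # $39,000 per cycle

# Estimated cost after automation: human-in-the-loop review instead of full adaptation.
review_hours_per_state = 2  # assumed review time per state
ai_cost_per_state = 15      # assumed inference and tooling cost, USD

automated_cost = states * (review_hours_per_state * loaded_hourly_rate + ai_cost_per_state)
savings = baseline_cost - automated_cost  # $31,750 per cycle under these assumptions

print(f"Baseline: ${baseline_cost:,}  Automated: ${automated_cost:,}  Savings: ${savings:,}")
```

Even rough numbers like these give the before-and-after picture a concrete anchor, and they surface the assumptions (hours, rates, review overhead) your stakeholders will want to debate.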
Pitfall 5: Building in silos
When generative AI is driven by one team, results often lack customer insight, feasibility, or organizational alignment. This creates projects that either can’t be delivered (i.e., not technically feasible with current resources) or don’t provide value because they’re disconnected from real customer needs.
Avoid this pitfall by creating a mechanism for conversations to happen across departments. Whether you call it a center of excellence, an AI steering committee, or something else, you need someone responsible for getting multiple stakeholders involved as you build these solutions. Start with representation from technology, sales, and product departments as your baseline and consider building from there; no single person has the full view of what customers need, what the product can do, and what your development capabilities are. Encourage debate across departments, and make sure all teams get hands-on time with generative AI tools so they understand what these systems can and can’t do.
When someone drives a project forward alone, you can end up with promises to customers that can’t be delivered or solutions that work but sit on the shelf because they’re disconnected from real customer needs. The key is bringing different business units together to combine those different perspectives and find the right path forward.
Developing your strategic roadmap with AWS
The best generative AI strategies are proactive and drive ROI. Avoiding the five pitfalls we discussed in this post can help you build solutions that matter to your business and your users. The AWS team can help you assess AI readiness and plan use cases, and services like Amazon Bedrock can help you build responsibly with tools like pretrained foundation models, continuous monitoring, and guardrails.
Ready to develop your AI strategy? Learn more about how we can help you avoid these common pitfalls and develop a strategic approach to generative AI.