AWS Partner Network (APN) Blog

Innovations in Backup and Restore, AWS Lambda Monitoring, and Natural Language Generation

Kicking off the new year, we see many Amazon Web Services (AWS) Partners doing great things and innovating on the AWS Cloud. Veeam is bringing backup technology to the cloud, IOpipe helps you get better insight into your AWS Lambda functions, and Narrative Science uses machine learning to generate automated reports from your big data.

These are just a few of the innovations being driven by members of the AWS Partner Network (APN), the global partner program for AWS that is focused on helping APN Partners build successful AWS-based businesses or solutions.

As an APN Partner, your organization receives business, technical, sales, and marketing resources to help you grow your business and better support your customers.

See all the benefits of being an APN Partner >>

Veeam: Bringing Backup Technology to the AWS Cloud

By Sam Khidhir, Solutions Architect at AWS

One of the challenges with migrating to the cloud is finding a backup strategy that works for servers that are not physically present in your data centers. How do you perform full-server backups of virtual instances? How do you do it efficiently, securely, and on a schedule that matches your business requirements?

Through Veeam, an AWS Advanced Technology Partner, customers can leverage the Veeam Backup & Replication data protection solution to protect Amazon Elastic Compute Cloud (Amazon EC2) instances regardless of Region or Availability Zone.

Customers using VMware Cloud on AWS also have full support (as of version 9.5 Update 3) for migrated VMware instances, providing a single solution to protect both virtual and on-premises workloads running on VMware vSphere.

Regardless of where data is stored, organizations get direct access to Veeam’s suite of powerful features, such as file-level recovery, changed block tracking, and innovative scheduling options like reverse incremental backups, where the most recent restore point is always a full backup.

In addition, Veeam supports both deduplication and compression for stored backup data, so you can balance storage cost against performance. There are no special file system or hardware requirements for the storage used.

To ensure backup activity does not negatively impact network bandwidth, Veeam lets you configure network throttling rules that limit transfer rates. For out-of-region backups, where bandwidth may be even more limited, Veeam can optimize WAN connections using a dedicated WAN Accelerator service.

Regardless of how data is transferred, the security of your data is paramount. All backups are encrypted both at rest and in flight: data transfers are done via TLS with AES-256, and the underlying block storage files are encrypted with AES-256 as soon as they arrive at the backup server.


Figure 1 – Veeam offers a range of powerful explorers to visualize your backups.

For customers looking for highly durable, low-cost object storage as an offsite backup of their on-premises storage, Veeam also provides an easy-to-implement interface to AWS Storage Gateway.

This enables organizations to eliminate legacy backup media like tape by providing seamless access to Amazon S3 Standard – Infrequent Access (Standard – IA) and Amazon Glacier, making it easy and cost effective to move long-term archives into the cloud.

Learn more and request a demo on the Veeam website >>


IOpipe: Get Better Insight into Your AWS Lambda Functions

By Ian Scofield, Partner Solutions Architect at AWS

Organizations have started building more and more serverless applications on AWS, thanks to benefits ranging from reduced operational overhead to the ability to scale effortlessly. As with any application, though, it’s important to have monitoring in place to ensure it is performing as expected.

Monitoring an application whose compute layer runs on AWS Lambda is different from monitoring one that runs on an Amazon EC2 instance. Metrics such as function duration and memory usage allow you to “right-size” your functions and establish performance baselines.

Amazon CloudWatch provides logging and monitoring capabilities for your Lambda functions, but IOpipe, an AWS Advanced Technology Partner with AWS Lambda Service Delivery Designation, provides a solution that extends the functionality provided by CloudWatch by giving you even more visibility into the execution of your Lambda functions.

Developers can integrate IOpipe into Lambda functions by adding its open source module, which currently supports the Node.js and Python execution environments. Once added, the module sends telemetry and other detailed information back to the IOpipe service, which lets you monitor the performance of the function in near real-time. It also provides common information such as memory consumed, duration, and number of invocations, and reports cold starts, which are not currently exposed in CloudWatch.
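The wrapping pattern a monitoring agent uses can be sketched in a few lines of Python. The decorator below is a hypothetical stand-in for the IOpipe module, not its actual API: a real agent would ship each report to its telemetry service rather than stashing it locally, but the sketch shows how per-invocation duration and cold starts can be captured around an unmodified handler.

```python
import functools
import time

def telemetry_wrapper(handler):
    """Hypothetical sketch of how a monitoring agent wraps a Lambda
    handler: it times each invocation, flags cold starts, and would
    normally ship the report to a collector service."""
    state = {"cold": True}  # module-level state survives warm invocations

    @functools.wraps(handler)
    def wrapped(event, context):
        started = time.monotonic()
        cold_start = state["cold"]
        state["cold"] = False  # later calls reuse the warm container
        try:
            return handler(event, context)
        finally:
            # A real agent would POST this report to its service here;
            # for illustration we just keep the most recent one.
            wrapped.last_report = {
                "duration_ms": (time.monotonic() - started) * 1000,
                "cold_start": cold_start,
            }

    return wrapped

@telemetry_wrapper
def handler(event, context):
    return {"statusCode": 200}
```

Because the decorator is transparent, the handler itself needs no changes, which is what makes this style of instrumentation low-friction to adopt.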

In addition to the metrics reported on your functions, IOpipe ingests execution traces, errors, and custom metrics to give you a complete picture of how your application is performing. This information is then aggregated, allowing you to search through millions, even billions, of invocations quickly for things like “Find invocations that have errors in the last 15 minutes,” or “Find invocations that were a cold start,” or “Find invocations for a function that were slower than expected.”

You can combine search parameters to answer even more specific questions, such as “Show me the invocations from user Iceman where the write to DynamoDB is greater than 1000ms.”
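Conceptually, these queries are filters over structured invocation records. The toy sketch below, using made-up records and field names (not IOpipe's actual schema), shows how the example questions map onto simple predicates:

```python
from datetime import datetime, timedelta

# Hypothetical invocation records, shaped like aggregated telemetry.
now = datetime(2018, 1, 15, 12, 0)
invocations = [
    {"ts": now - timedelta(minutes=5),  "error": True,  "cold_start": False, "duration_ms": 120},
    {"ts": now - timedelta(minutes=40), "error": True,  "cold_start": True,  "duration_ms": 900},
    {"ts": now - timedelta(minutes=2),  "error": False, "cold_start": True,  "duration_ms": 1500},
]

# "Find invocations that have errors in the last 15 minutes"
recent_errors = [i for i in invocations
                 if i["error"] and i["ts"] >= now - timedelta(minutes=15)]

# "Find invocations that were a cold start"
cold_starts = [i for i in invocations if i["cold_start"]]

# "Find invocations that were slower than expected" (say, over 1000 ms)
slow = [i for i in invocations if i["duration_ms"] > 1000]
```

The value of a service like IOpipe is running such filters at scale, across millions or billions of records, rather than over an in-memory list.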

Capturing metrics about your functions is important, but knowing what’s going on inside your code and how to optimize it further is even more important. Because Lambda bills in 100ms blocks, shaving off execution time has financial implications as well as performance ones. IOpipe recently launched a new feature that allows you to profile your Node.js functions.
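To see why shaving milliseconds matters financially, here is a small sketch of the 100ms block-rounding arithmetic. The per-GB-second price below is Lambda's published rate at the time of writing, used purely for illustration:

```python
import math

PRICE_PER_GB_SECOND = 0.00001667  # published AWS Lambda rate, for illustration

def invocation_cost(duration_ms, memory_mb):
    """Cost of one invocation: duration is billed in 100 ms blocks,
    rounded up, and scaled by the allocated memory in GB."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# Trimming a 201 ms function to 99 ms drops two full billing blocks:
before = invocation_cost(201, 512)  # billed as 300 ms
after = invocation_cost(99, 512)    # billed as 100 ms
```

Here the 102 ms saved cuts the per-invocation cost to a third, and at millions of invocations per day that rounding behavior adds up quickly.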

As you can see in Figure 2, profiling allows you to analyze how your code is performing and helps you identify areas that can be optimized by stepping through line by line and seeing the impact on the overall function.


Figure 2 – A flame graph generated by using the profile data of an AWS Lambda Invocation.

IOpipe makes it significantly easier to debug production applications and squeeze every last bit of performance out of your AWS Lambda functions.

Visit the IOpipe website to learn more >>


Narrative Science: Using Natural Language Generation to Automatically Build Reports from Data

By Pratap Ramamurthy, Partner Solutions Architect at AWS

A picture is worth a thousand words, and AWS Advanced Technology Partner Narrative Science takes this quite literally. Imagine you have a graph or chart representing a dataset, but its meaning is not obvious at first.

Narrative Science, which holds the AWS Machine Learning Competency, provides Natural Language Generation (NLG) software through its Quill platform, and can integrate narratives into other products and platforms via its Dynamic Narratives offering. Dynamic Narratives can help by adding a narrative, in English, to such a chart based on insights automatically derived from the data.

This use case, and others like it, is inspiring rapid adoption of NLG. A good example is business intelligence (BI) software, which companies use to derive insights from data by generating charts and graphs. The financial services industry deals with enormous amounts of data that need to be summarized in reports. Generating these reports can be time-consuming, but it can be automated using NLG.

Moreover, some financial compliance requirements can be met with transparent and consistent reporting that is traceable back to the system of record.

Another example is credit card customers who want insights into their monthly transactions. It is nearly impossible for a human to write a personalized report for each and every customer. Instead, this can be automated using NLG to save both time and money while also creating value for the end customer.

Not all NLG technologies are the same, however. NLGs can be broadly classified into three categories:

  • Basic NLG: This technology converts numbers into text or takes names from a database and inserts them into an email body.
  • Templatized NLG: Here, the user is responsible for writing templates, determining how to join ideas and interpreting the output.
  • Advanced NLG: Advanced NLG communicates the way humans do—assessing the data to identify what is important and interesting to a specific audience, then automatically transforming those insights into intelligent narratives. The result is insightful communications packed with audience-relevant information, written in conversational language.

Narrative Science’s Quill is an advanced NLG technology that provides realistic, relevant, and accurate narratives. Quill is flexible in ways that allow you to fit narratives to your needs. For example, you can choose between a paragraph narrative and a bulleted list, or between a short and a verbose narrative. Figure 3 shows sample data for a narrative generated for a bar chart in Tableau.


Figure 3 – Sample data for a narrative generated for a bar chart in Tableau.

Quill is a SaaS platform that runs on AWS, and Dynamic Narratives can be accessed through the Narrative Science API. You can send data in JSON, XML, or CSV, and the response will contain the narrative text. The response time depends on the amount of data sent, but it is usually near-instant.
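As a rough sketch, a JSON request might be assembled as follows. The endpoint and field names here are hypothetical placeholders, not the documented API contract; consult the Narrative Science API documentation for the real details.

```python
import json

# Hypothetical endpoint -- a placeholder, not the documented URL.
API_URL = "https://api.narrativescience.example/v1/narratives"

def build_payload(rows):
    """Package tabular data as JSON, one of the formats the API accepts."""
    return json.dumps({"data": rows})

payload = build_payload([
    {"month": "Jan", "revenue": 120},
    {"month": "Feb", "revenue": 145},
])

# A real call would POST the payload with your API token, e.g. via
# urllib.request with an Authorization header and
# Content-Type: application/json; the JSON response would then
# contain the generated narrative text.
```

The same data could equally be sent as XML or CSV; JSON is shown only because it maps directly onto Python data structures.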

Narrative Science can also integrate with third-party software and applications via their Dynamic Narratives offering. Dynamic Narratives leverages the core authoring engine of Quill to integrate with data and analytics platforms such as Tableau, Microstrategy, and Qlik.

Going beyond partnering with visualization tools, Narrative Science has partnered with Sisense Everywhere, a BI tool, to create a conversational chat bot that can be accessed using an Amazon Echo device. With this feature, you can ask questions about a dataset; Sisense receives the question and uses Narrative Science to generate the answer. See a demo of how Dynamic Narratives can be embedded into other data and analytics platforms via API.

You can access a trial of the Dynamic Narratives API, or use it through your favorite BI tool that is already integrated.

Check out the Narrative Science website to learn more >>