The Evolution of High Performance Computing

A guest blog by Jeff Layton, Principal Tech, AWS Public Sector

The High Performance Computing (HPC) world is evolving rapidly. New workloads, such as pattern recognition; speech, video, and text processing; facial recognition; deep learning; machine learning; and genomic sequencing, are being executed on HPC systems. The motivations behind this evolution are both economic and technical: as HPC systems become more powerful, more agile, and less costly, they can be used for applications that never before had access to high-scale, low-cost infrastructure.

The cloud has accelerated this evolution because it is scalable and elastic, allowing self-service provisioning of anywhere from one to thousands of processors in minutes. As a result, HPC users are coming to AWS with new and expanding application requirements and are seeing reduced time-to-results, faster deployment, greater architectural flexibility, and reduced costs. Cloud computing keeps HPC moving at the pace of computing innovation as users benefit from advances in microprocessors, GPUs, networking, and storage.

The cloud and the evolving HPC world

The HPC world’s need for ever more processing capability drives HPC system development. The current HPC architecture, the cluster, arose from commodity hardware and a common operating system that offered price-performance benefits far beyond proprietary systems. Once clusters built with commodity processors were doing production work for a number of companies and labs, their use exploded across HPC.

Clusters have come a long way and have greatly increased access to HPC resources at an affordable price, for both embarrassingly parallel and tightly coupled applications.

Issues with traditional HPC fixed architectures

The HPC cluster is a relatively fixed architecture: a set of servers (nodes), each with a small amount of internal storage (if any at all), connected by a dedicated network, with software tools to manage user requests for resources. It is rare for any changes to be made to the system, such as adding nodes, upgrading processors, adding node storage, or changing the network topology or technology. Once put in place, the vast majority of dedicated cluster systems never change architecture.

The rise of the Hadoop architecture, which addresses a large class of HPC problems, makes this inflexibility an even greater challenge. The Hadoop (MapReduce) architecture calls for nodes with a large amount of local storage and uses only TCP networking. The typical on-premises HPC system, by contrast, puts the smallest, least expensive reliable drive in each node. For Hadoop workloads, customers therefore often procure a separate system designed specifically for Hadoop, which creates two HPC architectures with conflicting configurations. This is unnecessary when cloud computing is the platform, as both approaches rely on commodity systems, dynamically created clusters, and software stacks purpose-built for the needs of particular problems.
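
To make this concrete, here is a minimal sketch of the dynamically created cluster pattern, using Amazon EMR through the boto3 SDK to launch a transient Hadoop cluster. The release label, instance types, counts, and log bucket below are illustrative placeholders, not recommendations.

    # A minimal sketch: launching a transient Hadoop cluster on Amazon EMR
    # with boto3. Release label, instance types, counts, and the S3 log
    # bucket are illustrative placeholders.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="transient-hadoop-cluster",
        ReleaseLabel="emr-4.2.0",                  # assumed EMR release
        Applications=[{"Name": "Hadoop"}],
        Instances={
            "MasterInstanceType": "m4.xlarge",
            "SlaveInstanceType": "m4.xlarge",
            "InstanceCount": 10,                   # 1 master + 9 core nodes
            "KeepJobFlowAliveWhenNoSteps": False,  # terminate when done
        },
        LogUri="s3://example-log-bucket/emr-logs/",  # hypothetical bucket
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("Cluster ID:", response["JobFlowId"])

The cluster exists only for the life of the job, which is exactly the kind of purpose-built, short-lived architecture a fixed on-premises cluster cannot offer.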

The cloud lets you move beyond the idea that HPC is only about clusters and that all applications must adapt to that model. If you have a new architecture in mind for your application or your workflow, you can simply and easily create it in the cloud.

Do you want to use a combination of containers and microservices for your application? The AWS Cloud allows you to construct what you need with some very simple code. If the architecture doesn’t work as well as you hoped, you simply turn the system off and stop paying for it.
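
As a sketch of how little code that can take, the snippet below registers a containerized microservice as an Amazon ECS task definition with boto3. The task family, container image, and sizing values are hypothetical placeholders rather than a reference design.

    # A minimal sketch: registering a containerized microservice with
    # Amazon ECS via boto3. The family, image, and sizing values are
    # hypothetical placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.register_task_definition(
        family="example-microservice",               # hypothetical name
        containerDefinitions=[{
            "name": "web",
            "image": "example-registry/web:latest",  # hypothetical image
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 80}],
        }],
    )

Deregister the task and stop the service when the experiment ends, and the spending stops with it.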

Learn more about HPC with AWS in this video below.

In future blogs, I’ll discuss some of the pain points of HPC beyond architectural rigidity and how the cloud addresses them. Stay tuned! In the meantime, learn more about HPC with AWS here: https://aws.amazon.com/hpc/

Cloud-Enabled Innovation in Personalized Medical Treatment

Hundreds of thousands of organizations around the world have joined AWS and many are using AWS solutions powered by Intel to build their businesses, scale their operations, and harness their technological innovations. We’re excited about our work with the hospitals and research institutions using bioinformatics to achieve major healthcare breakthroughs and unlock the mysteries of the human body.

These organizations are revolutionizing our understanding of disease and developing novel approaches to diagnosis and treatment. A human genome contains a complete copy of the genetic material necessary to build and maintain a human being. The sequencing of this code represents one of history’s largest scientific endeavors—and greatest accomplishments. When the Human Genome Project began in 1990, researchers had only a rudimentary understanding of DNA and the details of the human genome sequence. It took around 13 years and cost roughly $3 billion to sequence the first genome. But today, even small research groups can complete genomic sequencing in a matter of hours at a fraction of that cost.

The parallel evolution of genomics and cloud computing over the past decade has launched a revolution in discovery-based research that is transforming modern medicine. Doctors and researchers are now able to more accurately identify rare inherited and chromosomal disorders, and develop highly personalized treatment plans that reflect the unique genetic makeup of individual patients.

This eBook highlights the important work bioinformatics organizations are undertaking and explains how we are helping them achieve their mission. The stories of these four organizations illustrate what is possible with the AWS Cloud:

  1. The National Institutes of Health’s Human Microbiome Project (HMP) – Researchers from all over the globe can now access HMP data through Nephele, an AWS-supported platform, and use that information to identify possible microbial causes of preterm births, diabetes, inflammatory bowel disease, and other disorders.
  2. The Inova Translational Medicine Institute (ITMI) – AWS architecture facilitates the secure storage and management of ITMI’s genomic data, and enables Inova researchers to develop personalized treatments and predictive care for newborns suffering from congenital disorders and patients of all ages with cancer-causing genetic mutations.
  3. University of California, San Diego’s Center for Computational Biology & Bioinformatics (CCBB) – CCBB has seven core AWS-supported analysis pipelines, all optimized to handle next-generation sequencing data. Each pipeline is targeted at identifying small but important molecular differences, whether in a tumor’s DNA or in the microbiome, enabling doctors to tailor treatment on an individual level.
  4. GenomeNext – GenomeNext’s AWS-based platform represents the newest technological benchmark in the history of genomic analysis, and allows even small research groups to complete genomic sequencing in a matter of hours at a fraction of the traditional cost.

Medical and scientific communities around the world are just starting to take advantage of the transformative opportunities that personalized genomic medicine offers patients. These organizations are at the forefront of that medical revolution. Download the eBook to learn more and check out the infographic below to see how the cloud transforms healthcare.

Going “All-In” on AWS: Lessons Learned from Cloud Pioneers

Increasingly, customers across the public sector are going “all-in” on the AWS Cloud. Instead of asking whether to go all-in, these pioneers asked how quickly they could do it. Balancing the need to improve efficiency, stay agile, and meet mandates, government and education customers are committing to going all-in on AWS, meaning they have declared that AWS is their strategic cloud platform.

At last year’s re:Invent, we were lucky enough to hear from Mike Chapple of Notre Dame; VJ Rao of the National Democratic Institute; Eric Geiger of the Federal Home Loan Bank of Chicago; and Roland Oberdorfer of Singapore Post eCommerce on a panel, where they shared insights into their decisions to move all-in or cloud-first, the value they’ve seen, and the impact on their missions.

Success can be contagious

All of these organizations are at various stages of their journey to the cloud. For example, Notre Dame is a year in with a third of its services migrated, whereas the Federal Home Loan Bank of Chicago is three years in and recently unplugged its last piece of on-premises infrastructure. No matter the stage, they all have similar experiences, lessons learned, and a shared goal: the cloud.

After initial successes with pilot projects, such as websites or testing environments, IT teams within these organizations saw the possibilities and savings with AWS and decided to migrate more of their infrastructure. Whether the draw was cost savings or scalability, these quick wins demonstrated business value and made a compelling case for bringing other services to the cloud.

“Look for things that are as straightforward as possible to guarantee success,” advised Mike Chapple, Sr. Director for IT Service Delivery, Notre Dame.

The feeling of success can be contagious, and because of the initial success, each of these organizations wanted to do more and more. They took the time to carefully and thoughtfully design their infrastructure or “data center in the cloud” with an AWS Solutions Architect. Getting serious from the start paid off in the long run.

They may have begun the journey wanting to lower costs, but they stayed on it because of the possibilities the cloud opened up. No longer are they constrained by budgets, scale, or compute capacity.

Tidbits of advice on the journey

Since adopting the all-in strategy, these organizations are realizing what is possible with the power of the cloud. But gaining buy-in was not always easy. The panelists noted that they could prove security, encrypting data both in flight and at rest, but surprisingly, the biggest pushback came from their own staff.

With some universities and businesses, tradition runs deep, and that was the case with Notre Dame, a 175-year-old institution. So going all-in on AWS required more than just initial success with a few small projects. It required storytelling, training, and education. “One of the things we’ve learned along the way is the culture change that is needed to bring people along on that cloud journey and really transforming the organization, not only convincing that the technology is the right way to go, but winning over the hearts and minds of the team to completely change direction,” Mike Chapple said.

Change happens, and the cloud is the natural evolution of IT. These teams did a lot of storytelling, mapping out the move from a virtualized on-premises environment to a virtualized environment in the cloud as the next logical step. They planned early, trusted their instincts, and told the cloud story.

Watch this panel discussion and don’t miss out on the chance to hear from some other customers who have gone all-in with AWS at one of our upcoming Summits in Chicago, New York, Washington DC, and Santa Clara. Learn more about these events here and register for the AWS Public Sector Summit on June 20-21 here.

OSET Foundation Using AWS to Advance Cloud-Based Election Innovations

We are pleased to announce that the Open Source Election Technology (OSET) Foundation’s TrustTheVote™ Project is utilizing AWS to ensure that the democratic process is not threatened by archaic and obsolete systems. Often, these systems are no longer supported by manufacturers, and in the case of voting machinery, rely on proprietary software that’s difficult to inspect or audit.

OSET is a 501(c)(3) non-profit election technology research institute focused on creating open source software for elections administration in the US and around the world. The TrustTheVote™ Project is an open source software initiative that develops and provides an election technology framework with apps. States and counties can then adopt, adapt, and deploy the software to administer elections. Currently, OSET offers apps for states’ online voter registration services, with ballot design and election results reporting in development and testing. More apps are on the way for all aspects of election administration. The cloud-driven open source approach means that any election jurisdiction can adopt and adapt OSET’s apps and launch them faster and more cost effectively than ever before.

Election officials have to deal with aging hardware, shrinking budgets, and inefficient processes, all while managing chaotic election logistics, polling place volunteers, and local websites that often crash when everyone wants election results at the same time. Cloud technology, combined with open data, open standards, and open source development, offers an ideal solution for elections administration: no hardware to buy or maintain, capacity that scales with traffic, and a pay-as-you-go model.

The OSET Foundation is driving increased innovation in elections technology, like voter registration services, ballot creation, election results reporting, analytics, and voter information services with zero-footprint data center solutions that were not possible with traditional IT infrastructure. Since it’s on AWS GovCloud (US), the TrustTheVote Project technology can be used by any state or county looking to quickly improve elections administration without the high costs and long time frames of old computer systems.

OSET chose to make its software available on AWS GovCloud (US) because it offers the security and compliance controls required for sensitive data, along with the scalability, agility, and cost savings that come from not buying hardware. And it can be quickly and easily delivered anywhere in the country.

Cloud-based voter registration, ballot design, and elections results reporting are ideal starting points to lowering costs and improving the public trust in our democracy.

Announcing AWS Cloud Credits for Nonprofits with TechSoup Global

AWS is excited to announce that we are partnering with TechSoup Global (TSG), a nonprofit technology network that connects nonprofit organizations with discounted or donated tech products and services, to provide AWS credits to nonprofits. Through the AWS Credit Program, nonprofits get access to selected, packaged AWS Cloud services.

From issue advocacy to charitable causes, from health and welfare to wildlife, over 17,500 nonprofit organizations are already using AWS to radically reduce infrastructure costs, build their capacity, and reduce waste. With AWS, nonprofits don’t have to make large upfront investments in hardware and spend time and effort managing that hardware.

Instead, organizations can provision exactly the computing resources they need to power their organizations and keep focus on the mission. Our expansive technology platform allows nonprofits of all sizes to run lean and frees them to be fast, agile, and even global, while still being efficient with IT spend, paying only for what they use.

Lack of access to up-to-date IT infrastructure services should not prevent nonprofits from accomplishing their missions. Through the Nonprofit Credit Program with TechSoup Global, AWS helps subsidize their infrastructure costs: eligible nonprofits will receive $2,000 worth of AWS service credits annually to be used towards Amazon WorkDocs, Amazon WorkMail, and Amazon WorkSpaces.

What Do You Get?

The AWS Nonprofit Credit Program at TechSoup provides support to eligible nonprofits, charities, and public libraries throughout the United States by providing promotional credits to subsidize cloud infrastructure costs. To access your credits, go to: www.techsoup.org/amazon-web-services

“Being able to announce our partnership with AWS means together we are able to meet a very specific need for the nonprofit community. By bringing this generous offer to the sector, AWS is enabling nonprofits to benefit from cloud-based solutions with top tier support. This program goes a long way toward helping TechSoup deliver on its promise to connect changemakers worldwide with the technology they need to improve lives,” said Gayle Samuelson Carpentier, Chief Business Development Officer, TechSoup.

The AWS Nonprofit Credit Program with TechSoup is now available in many countries. Read about the global expansion update here.

Save the Date for the AWS Public Sector Summit in Washington, DC, June 20-21, 2016

We are excited to announce our seventh annual AWS Public Sector Summit scheduled for June 20-21, 2016 in Washington, DC.

Join us for one of the largest gatherings of government, education, and nonprofit technology leaders sharing their firsthand stories of innovation for the public good. Last year’s summit featured a star lineup, including the CIOs of the US, UK, Canada, and Singapore, as well as IT leaders from agencies and organizations near and far.

This year, we are excited to welcome Andy Jassy, Amazon SVP and the visionary leader of Amazon Web Services, along with even more customers innovating in mobility, the Internet of Things, scientific computing, advanced security, open data, and more. The Summit includes over 80 breakout sessions, direct access to AWS technologists, and inspiring customer spotlights.

Check out this video to get a flavor for the event.

Register now for the AWS Public Sector Summit here!

A Practical Guide to Cloud Migration

To achieve the full benefits of moving applications to the AWS platform, it is critical to design a cloud migration model that delivers optimal cost efficiency. This includes establishing a compelling business case, acquiring new skills within the IT organization, implementing new business processes, and defining an application migration methodology for moving from a traditional on-premises computing platform to a cloud infrastructure.

The white paper A Practical Guide to Cloud Migration: Migrating Services to AWS, coauthored by AWS’s Blake Chism and Carina Veksler, provides a high-level overview of the cloud migration process and is a great first read for customers who are thinking about cloud adoption.

The path to the cloud is a journey to business results. AWS has helped hundreds of customers, such as the City of McKinney, TX, and Georgetown University, achieve their business goals at every stage of their journey. While every organization’s path is unique, there are common patterns, approaches, and best practices that can streamline the process:

  1. Define your approach to cloud computing from business case to strategy and change management to technology.
  2. Build a solid foundation for your enterprise workloads on AWS by assessing and validating your application portfolio, and integrating your unique IT environment with solutions based on AWS cloud services.
  3. Design and optimize your business applications to be cloud-aware, taking direct advantage of the benefits of AWS services.
  4. Meet your internal and external compliance requirements by developing and implementing automated security policies and controls based on proven, validated designs.

Early planning, communication, and buy-in are essential. Understanding the forcing function (time, cost, and availability) is key, and it will be different for each organization. When defining the migration model, organizations must have a clear strategy, map out a realistic project timeline, and limit the number of variables and dependencies for transitioning on-premises applications to the cloud. Throughout the project, build momentum among key constituents through regular meetings to review the progress and status of the migration, keeping people enthused while also setting realistic expectations about the availability time frame.

Learn more about what it takes to migrate to the cloud in this guide here.

Public Sector Customers Excited About the New AWS Region Announcements

To kick off the New Year, the AWS Worldwide Public Sector team is excited about the announcement of our new region in the Republic of Korea and the preannouncement of the Canada region last week.

The AWS Cloud operates 32 Availability Zones within 12 geographic Regions around the world, with 11 more Availability Zones and 5 more Regions coming online throughout the next year in Canada, China, India, Ohio, and the United Kingdom (see the AWS Global Infrastructure page for more info).

The region-based AWS model has proven to be a good standard for our government, education, and nonprofit customers around the globe. Because of the unique needs of these public sector organizations, we understand that it is important for them to exercise complete control over where their data is stored and where it is processed.

Now, Korean-based developers and organizations, as well as multinational organizations with end users in Korea, can securely store and process their data in Korea, with single-digit millisecond latency across most of the country.

Governments, multinational corporations, and international organizations are at a significant crossroads, trying to balance innovation and security. They want the elasticity, scalability, and lower total cost of ownership (TCO) of cloud computing, but they must also meet significant security requirements to protect data and personal privacy.

With the launch of the AWS Region on Korean soil, public sector organizations will now have the opportunity to move sensitive and mission-critical workloads to AWS.

The Seoul Region launches with two Availability Zones (AZs). Each AZ includes one or more discrete datacenters, each with redundant power, networking, and connectivity. Each AZ is designed to be resilient to issues in any other AZ, enabling customers to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single datacenter.
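
As an illustrative sketch of how customers take advantage of multiple AZs, the snippet below uses boto3 to spread instances across two Seoul Availability Zones so an application can survive the loss of either zone. The AMI ID, instance type, and zone names are placeholders (actual zone names vary by account).

    # A minimal sketch: spreading EC2 instances across two Seoul
    # Availability Zones with boto3 for fault tolerance. The AMI ID,
    # instance type, and zone names are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-northeast-2")

    for zone in ["ap-northeast-2a", "ap-northeast-2c"]:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder AMI
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )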

Additionally, this investment in the Asia Pacific area will enable increased innovation and collaboration in education, nonprofits, scientific computing, and open data efforts.

Public sector customers will find the new AWS Region has services and features like AWS Identity and Access Management (IAM) and AWS Trusted Advisor that can enable secure information technology operations, whether they are managing health records, building out new digital services for citizens, or looking for new ways to collaborate with colleagues. Beyond these security services, public sector customers should also enjoy the elasticity and affordability of our compute, networking, storage, analytics, and database web services. To learn more about AWS Cloud Security, visit here.
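
For example, here is a hedged sketch of the kind of least-privilege control IAM enables, using boto3 to create a policy that grants read-only access to a single Amazon S3 bucket. The policy name and bucket are hypothetical.

    # A minimal sketch: a least-privilege IAM policy created with boto3,
    # granting read-only access to one S3 bucket. The policy name and
    # bucket ARN are hypothetical.
    import json
    import boto3

    iam = boto3.client("iam")

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-records-bucket",    # hypothetical
                "arn:aws:s3:::example-records-bucket/*",
            ],
        }],
    }

    iam.create_policy(
        PolicyName="ReadOnlyRecordsAccess",  # hypothetical name
        PolicyDocument=json.dumps(policy_document),
    )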

Investing in the future of cloud

AWS is also delivering its AWS Educate Program to help promote cloud learning in the classroom with eight local universities, including Sogang University, Yonsei University, and Seoul National University. Since its launch locally in May 2015, over 1,000 Korean students have participated in AWS-related classes and nonprofit e-learning programs, such as “Like a Lion.”

The launch of the Seoul Region marks the fifth AWS Region in the Asia Pacific area and brings the global total of Regions to 12 (with more to come in 2016!).

For more details about this announcement, please see the official posting here.

Resilience Data Analytics Tool and the Cloud Help Humans Survive and Thrive

On the topic of resilience (the ability to withstand, respond to, and adjust to chronic or acute stressors), there are many data sets out there on social-ecological systems, human environments, stressors, shocks, natural disasters, and conflict. The challenge is that these data sets are often stored in silos or confined to the academic community.

However, if we could analyze existing data for insights into which investments and interventions countries should make, more people could become resilient.

Resilience Atlas brings together 60 data sets for governments and scientists: 12TB of data now available

We had the opportunity to talk with Alex Zvoleff, Director of Data Science, and Sandy Andelman, Chief Scientist at Conservation International.

Conservation International, with funding from the Rockefeller Foundation, released a new online tool, the Resilience Atlas. The Atlas is designed to build understanding of the extent and severity of the stresses and disasters affecting rural livelihoods, production systems, and ecosystems, and of how different types of assets, from natural capital to financial capital and social networks, affect their ability to thrive and even transform in the face of adversity.

For the first time, data from satellites, ground-based biophysical measurements and household surveys – from more than 60 of the best available data sets (including the NASA NEX data set) totaling over 12 terabytes – have been integrated, analyzed and made available in an easy-to-use map interface. By integrating these disparate data sets, the Atlas connects themes and perspectives so that people making important investment, development and security decisions can easily see the full picture.

What is the challenge you are trying to solve with the Resilience Atlas?

Sandy: In order to thrive, societies need to exhibit resilience. Evidence-based decision-making is a huge challenge in areas where data are inaccessible, and the Resilience Atlas aims to make essential information available in a digestible form to governments, communities, donors, and businesses struggling to manage the risks and uncertainties associated with climate change, conflict, population growth, and other stressors. It can provide them with insights on the magnitude of the challenge and on which kinds of interventions and investments will make a difference. By creating a system instead of just providing the answers, the Atlas lets people reach their own insights with the publicly provided data.

How was the map created?

Alex: Faced with a large data volume (12 TB) during work on the Atlas, we had to handle the volume, support intense computation, and access the data on demand. Before we considered the cloud to host the Atlas, a lot of the data sets were held by individual researchers. Though a few were made available in publications, they were not easy to access, crop, or download; in general, much of the data was either not publicly available or required specialized knowledge to decipher. Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances were used for processing and for bringing up a large fleet of servers at one time; 120 servers ran in parallel. Amazon EC2 made it possible to do in only a few days what would have taken over a month. Additionally, we can automate processing as more data sets covering different trends become of interest to the community. All of this is made possible by the cloud computing infrastructure of Amazon Web Services (AWS).
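
As a rough sketch of the pattern Alex describes, the snippet below requests a fleet of EC2 Spot Instances with boto3. The bid price, AMI, and instance type are illustrative placeholders, not the project’s actual configuration.

    # A rough sketch: requesting a fleet of EC2 Spot Instances with
    # boto3 for parallel batch processing. Bid price, AMI, and instance
    # type are illustrative, not the Atlas's actual setup.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.10",                        # max hourly bid (USD)
        InstanceCount=120,                       # one fleet, in parallel
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            "InstanceType": "c4.2xlarge",        # placeholder type
        },
    )
    for request in response["SpotInstanceRequests"]:
        print(request["SpotInstanceRequestId"])

Because Spot capacity is priced by supply and demand, a large fleet like this can run at a fraction of On-Demand cost, which is how a few days of processing replaced more than a month of work.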

How does the map impact human life?

Sandy: The hope is that this map provides insights into how extensive and how severe the different kinds of shocks people face are, across more than forty countries in the Sahel, Horn of Africa, and South and Southeast Asia. For example, with increasing climate variability or financial market shocks, the Atlas can give insights into the particular systems at risk and help shift decision making toward a more evidence-based approach that pulls together the best data to get the complete picture. By mining the data, users can understand which kinds of interventions and investments have actual evidence for their effectiveness. Another example is the Journeys tool. One Journey focuses on Ethiopian pastoralists, guiding users first to a map showing where they live and then exploring the stressors they face, such as changing rainfall patterns that threaten the viability of pastoralism as a livelihood, and the lack of investment in human capital, such as literacy and access to information, which can hinder their ability to adapt or transform.

What is the goal of the Atlas in relation to governments, communities, donors and businesses?

Sandy: We can work with governments to use the Atlas as a planning tool. Officials can use the data already in the Atlas, or they can work with us to put their finer-scale data into the Atlas as a tool to assess resilience and inform better investment decisions. Open access to this data gives more people a better understanding of important issues like climate change, flooding, and droughts. What was missing before was the integrated picture: more and more, in the world we live in today, we need a systems perspective, because decisions about poverty alleviation are not independent of decisions about conservation and about what kind of agriculture to invest in. The Atlas is unique because it pulls together all the different data sets that people might be familiar with individually and puts them together to give the community the full picture for making their important decisions. Determining cause and effect is complex, which is why it requires multiple disciplines and experts looking at problems from multiple angles.

What is the user experience like with the Atlas?

Alex: With the “Journeys” feature, the Atlas guides users on how to tell stories with data, enabling users to explore the specific data that are relevant to the questions they want to answer. Instead of telling them the answers, the Atlas helps them to discover the answers for themselves. It has a simple three-step approach:

  1. Select a geography and a system of interest to produce a map of how it is distributed.
  2. Identify the stressors and shocks affecting the system, and their extent and severity.
  3. Explore what kinds of assets (such as natural, human, social, financial, and manufactured capital) might increase resilience.

Users can share their insights by sharing map links via social media or by embedding Atlas data within their own webpages. All of the data is openly available free of charge, and the site supports API access for private industry, allowing a broader audience to work with the data and build other tools.

Go ahead and explore the map here.
AWS also provides public data sets; visit www.aws.amazon.com/opendata to learn more.

1776: Where Revolutions Begin

The year 1776 is celebrated in the United States as the official beginning of the country’s freedom, with the Declaration of Independence issued on July 4.

Taking that year as the inspiration for its name, 1776 is a global incubator and seed fund helping startups transform industries that impact millions of lives every day, in the areas of education, energy and sustainability, health, transportation, and cities.

To encourage startups to envision innovative ideas, 1776 created the Challenge Cup, a chance for the most promising startups to share their vision on a global stage.

What is the Challenge Cup?

Each year, 1776 hosts a worldwide tournament called the Challenge Cup. Together with partners and over 50 incubator hosts around the world, 1776 discovers the most promising, highly scalable startups poised to solve the major challenges of our time.

Startups advance through three rounds: Local, Regional, and Global Finals. All of the regional winners and a host of wild cards will be invited to participate in the Challenge Cup Global Finals next June in Washington, D.C. There they will compete for over $1 million in prizes and spend time with the investors, customers, media, and other key connections that can help them succeed on a global scale.

The power to change the world

From the spark of an idea to the first customer to IPO and beyond, the world’s most progressive startups build and grow their businesses on Amazon Web Services (AWS).

We believe entrepreneurs have the power to change the world, and we are excited to partner with 1776 and support others who are dedicating their entrepreneurial journey to the industries that matter most to our lives — education, energy, health, transportation, food, and more. Throughout the Challenge Cup, we will provide winners of the competitions with AWS credits that can be used on eligible cloud services to help them innovate using cloud technology.

We see major opportunities for tech entrepreneurship, particularly for new businesses that need local technology enablement. At AWS, we are committed to improving tech education around the world and want to continue to develop talent and trained resources. We need to create the right environment for mentorship, between individuals and between businesses. Together, we can bring the right tools, technology, and training to reinvent the business ecosystem with cloud computing that allows for economic growth and world-changing outcomes.

We agree with 1776 that the Challenge Cup is much more than a competition — it’s a movement of startups bringing world-changing ideas to life. By working together, we can unleash the creative power of collaboration and technology.

“Our partners are part of this global convening of entrepreneurs and are an integral part of the Challenge Cup. By making it possible for startups to build solutions with minimal capital costs, Amazon Web Services has been a powerful catalyst to the explosion in startup activity around the world. AWS is committed to supporting startups that are impacting essential human needs and we are thrilled to have them be a part of this year’s tournament as one of our global partners,” said Evan Burfield, 1776 co-founder.

From D.C. to Nairobi to Singapore, we can’t wait to see what ideas these startups from around the world will bring to the competition. Follow the action at #1776Challenge. Looking to attend an event? Register here.