Monday, February 22, 2016

AWS Week in Review - February 15, 2016

Let's take a quick look at what happened in AWS-land last week:


Monday, February 15
Tuesday, February 16
Wednesday, February 17
Thursday, February 18
Friday, February 19
Saturday, February 20
Sunday, February 21

New & Notable Open Source



  • aws-api-gateway-for-cloudformation is a set of Custom Resources that enables API Gateway support for CloudFormation.

  • WaterFlow is a non-magical / easy to understand / JDK8 framework for use with Amazon Simple Workflow.

  • gnu-mailman-aws documents the process of installing and running GNU Mailman on an EC2 instance.

  • spa-aws is a single page application using AWS Lambda.

  • petit is a URL shortener written in Ruby for use on AWS.

  • state-of-cloud is a tool to inventory cloud resources and report on the use of AWS and other services.

  • aws-helpers is a set of Node.js helpers for AWS.

  • aws-lambda-canary is a canary project for Lambda services on AWS.

  • cookiecutter-lambder is a cookiecutter template for creating Lambder projects.

  • aws provides Racket support for AWS projects.


New SlideShare Presentations



New Customer Success Stories



New YouTube Videos



Upcoming Events



Help Wanted



Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.


Jeff;



Friday, February 19, 2016

Yi Action Camera Budget Action Cam

According to a survey conducted by co-op.kinja.com, the most recommended budget action camera among users is the Yi Action Camera. Rated 4.5 out of 5 stars, this little device earned several great reviews on Amazon. It stands up well next to the GoPro Hero 4, which is much more expensive. For about $80, you can ... Continue reading Yi Action Camera Budget Action Cam ->

Thursday, February 18, 2016

GoPro In Trouble? Cutting Camera Lineup by Half

GoPro announced its revenue for the past quarter and, according to the results, it seems that its earnings are going off a cliff. The popular camera company's revenue was $436.6 million for the past quarter, which translates to a loss of $34.4 million. Also, the forecast for next quarter drops from $300 million ... Continue reading GoPro In Trouble? Cutting Camera Lineup by Half ->

Wednesday, February 17, 2016

New - Notifications for AWS CodeDeploy Events

AWS CodeDeploy is a service that helps you to deploy your code to a fleet of EC2 or on-premises instances while taking care to leave as much of the fleet online as possible. CodeDeploy was designed to work with fleets that range in size from a single instance all the way up to thousands of instances (see my post, New AWS Tools for Code Management and Deployment, for more information).


Notifications for CodeDeploy
In order to make it easier for you to use CodeDeploy as a part of your overall build, test, and deployment pipeline, we are introducing a new notification system today. You can now create triggers that send Amazon SNS notifications before, during, and after the deployment process for your applications. Triggers can be set for the deployment as a whole or for the individual instances targeted by the deployment, and are sent on both successes and failures. Here is the full list of triggers:



  • DEPLOYMENT_START

  • DEPLOYMENT_SUCCESS

  • DEPLOYMENT_FAILURE

  • DEPLOYMENT_STOP

  • INSTANCE_START

  • INSTANCE_SUCCESS

  • INSTANCE_FAILURE


You can create up to 10 triggers per application. You can connect several triggers to a single topic, or you can send each trigger to a distinct topic. You can use any of the delivery protocols supported by SNS (http, https, email, SMS, and mobile push). You can also invoke a Lambda function.
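If you prefer to configure triggers programmatically rather than through the Console, here's a minimal sketch using boto3. The application name, deployment group name, and topic ARN below are placeholders, and the event names follow the CamelCase forms used by the CodeDeploy API:

```python
# Sketch: attach an SNS trigger to an existing deployment group with boto3.
# The names and the ARN below are hypothetical; adjust them for your environment.
import boto3

codedeploy = boto3.client('codedeploy')

codedeploy.update_deployment_group(
    applicationName='SampleApplication',
    currentDeploymentGroupName='SampleFleet',
    triggerConfigurations=[
        {
            'triggerName': 'AllDeploymentEvents',
            'triggerTargetArn': 'arn:aws:sns:us-east-1:123456789012:codedeploy-notifications',
            'triggerEvents': [
                'DeploymentStart', 'DeploymentSuccess',
                'DeploymentFailure', 'DeploymentStop',
                'InstanceStart', 'InstanceSuccess', 'InstanceFailure',
            ],
        },
    ],
)
```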


Creating a Trigger
Triggers can be created using the AWS Management Console or the AWS Command Line Interface (CLI). I'll use the Console in this post. I set up the sample CodeDeploy application (three t2.micro instances):



And then I did an initial deployment:



Then I created an SNS topic, subscribed to it via email, and confirmed the subscription:
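The same steps can be scripted; here's a quick sketch with boto3 (the topic name and email address are placeholders):

```python
# Sketch: create the notification topic and add an email subscription.
import boto3

sns = boto3.client('sns')

topic_arn = sns.create_topic(Name='codedeploy-notifications')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='me@example.com')
# The email recipient must click the confirmation link before messages are delivered.
```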




I returned to the Console and opened up my application within CodeDeploy:



I opened up IAM in another tab and updated the policy associated with the Service Role, giving it permission to write to SNS (you won't need to do this if you select the managed AWSCodeDeployRole for your application):
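Scripted, that inline policy addition might look like this sketch (the role name, policy name, and topic ARN are placeholders):

```python
# Sketch: allow the CodeDeploy service role to publish to the notification topic.
import json
import boto3

iam = boto3.client('iam')

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': 'sns:Publish',
        'Resource': 'arn:aws:sns:us-east-1:123456789012:codedeploy-notifications',
    }],
}

iam.put_role_policy(
    RoleName='CodeDeployServiceRole',   # hypothetical role name
    PolicyName='AllowSnsPublish',
    PolicyDocument=json.dumps(policy),
)
```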



Back in CodeDeploy, I clicked on Create trigger, entered a name, chose my events (all of them for this example), and selected my SNS topic from the dropdown:



I could also have chosen individual events:



My trigger was created and displayed in the Console:



CodeDeploy sent a confirming message to the topic and it was at the top of my Inbox:



Then I initiated a deployment and waited for the emails to arrive! Here's what they looked like:



I used email for illustrative purposes; in a real-world application you would probably want to write some code to catch and handle the messages.
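For example, a Lambda function subscribed to the same topic could parse each notification and react to failures. This is only a sketch; the field names inside the message body are assumptions, so inspect a real notification to confirm the schema:

```python
# Sketch: handle CodeDeploy notifications delivered through SNS.
import json

def handler(event, context):
    for record in event['Records']:
        message = json.loads(record['Sns']['Message'])
        # 'status' and 'deploymentId' are assumed field names; verify against
        # an actual notification before relying on them.
        status = message.get('status', 'UNKNOWN')
        deployment_id = message.get('deploymentId', 'n/a')
        print("Deployment {0} reported status {1}".format(deployment_id, status))
        if status in ('FAILED', 'STOPPED'):
            # Page someone, open a ticket, trigger a rollback, etc.
            pass
```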


Triggers are available now and you can start using them today!

--
Jeff;




Samsung to Unveil the Samsung S7 and S7 Edge

In a little less than a week, on February 21st, Samsung is expected to unveil the newest additions to its smartphone lineup - the Samsung S7 and S7 Edge. Several leaks show that both smartphones look similar to the Samsung S6 and S6 Edge. They will be sold in black, silver, white, ... Continue reading Samsung to Unveil the Samsung S7 and S7 Edge ->

Tuesday, February 16, 2016

Apple's USB-C Cable Replacement Program

Apple just launched a USB-C Cable Replacement Program that is available worldwide for those who purchased the MacBook. According to Apple, a limited number of cables are expected to fail due to a "design issue" which may cause intermittent charging or none at all. (Design issue? Sounds like a loose wire due to ... Continue reading Apple's USB-C Cable Replacement Program ->

Amazon EMR Update - Support for EBS Volumes, and M4 & C4 Instance Types

My colleague Abhishek Sinha wrote the guest post below to tell you about the latest additions to Amazon EMR.

--
Jeff;


Amazon EMR is a service that allows you to use distributed data processing frameworks such as Apache Hadoop, Apache Spark and Presto to process data on a managed cluster of EC2 instances.


Newer versions of EMR (3.10 and 4.x) allow you to use Amazon EBS volumes to increase the local storage of each instance. This works well with the existing set of supported instance types, and also gives you the ability to use the M4 and C4 instance types with EMR. Today I would like to tell you more about both of these features.


Increasing Instance Storage Using Amazon EBS
EMR uses the local storage of each instance for HDFS (Hadoop Distributed File System) and to store intermediate files when processing data from S3 using EMRFS. You can now use EBS volumes to extend this storage. The EBS volumes are tied to the lifecycle of the associated instances and augment any existing storage on the instance. If you terminate a cluster, any associated EBS volumes are also deleted along with it.


You will benefit from the ability to customize the storage of your EMR instances if...



  1. Your processing requirements demand a larger amount of HDFS (or local) storage than what is available by default on an instance. With support for EBS volumes, you will be able to customize the storage capacity on an instance relative to the compute capacity that the instance provides. Optimizing the storage on an instance will allow you to save costs.

  2. You want to take advantage of the latest generation EC2 instance types such as the M4, C4, and R3 and need more storage than is available on these instance types. You can now add EBS volumes to customize the storage in order to better meet your needs. If you're using the older M1 and M2 instances, you should be able to reduce costs and improve performance by moving to newer M4, C4 and R3 instances. We recommend that you benchmark your application to measure the impact on your specific workloads.


It's important to note that the EBS volumes added to an Amazon EMR cluster do not persist data after the cluster is shut down. EMR will automatically clean up the volumes when you terminate your cluster.


Adding EBS Volumes to a Cluster
EMR currently groups the nodes in your cluster into 3 logical instance groups: a Master Group, which runs the YARN Resource Manager and the HDFS Name Node Service; a Core Group, which runs the HDFS DataNode Daemon and the YARN Node Manager Service; and Task Groups, which run the YARN Node Manager Service. EMR supports up to 50 instance groups per cluster and allows you to select an instance type for each group. You can now specify the amount of EBS storage you want to add to each instance in a given instance group. You can specify multiple EBS volumes, add EBS volumes to instances with instance storage, or even combine different volumes of different types. Here is how you specify your storage configuration in the EMR Console:



For example, if you configure a Core Group to use the m4.2xlarge instance, attach a pair of 1 TB gp2 (General Purpose SSD) volumes to each instance, and want 10 instances in the group, the Core Group will have 10 instances with a total of 20 volumes. Here's how you would set that up:
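Here's what that same Core Group configuration might look like from the AWS SDK for Python. The cluster name, roles, and release label are placeholders, so adjust them for your account:

```python
# Sketch: launch an EMR cluster whose Core group instances each get two 1 TB gp2 volumes.
import boto3

emr = boto3.client('emr')

emr.run_job_flow(
    Name='ebs-backed-cluster',            # hypothetical name
    ReleaseLabel='emr-4.2.0',             # EBS volumes require AMI 3.10 or EMR release 4.0+
    ServiceRole='EMR_DefaultRole',
    JobFlowRole='EMR_EC2_DefaultRole',
    Instances={
        'InstanceGroups': [
            {
                'Name': 'Master',
                'InstanceRole': 'MASTER',
                'InstanceType': 'm4.xlarge',
                'InstanceCount': 1,
            },
            {
                'Name': 'Core',
                'InstanceRole': 'CORE',
                'InstanceType': 'm4.2xlarge',
                'InstanceCount': 10,
                'EbsConfiguration': {
                    'EbsBlockDeviceConfigs': [
                        {
                            'VolumeSpecification': {'VolumeType': 'gp2', 'SizeInGB': 1000},
                            'VolumesPerInstance': 2,   # 10 instances x 2 volumes = 20 volumes
                        },
                    ],
                },
            },
        ],
        'KeepJobFlowAliveWhenNoSteps': True,
    },
)
```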



To learn more, read the EBS FAQ. Support for EBS is available starting with AMI 3.10 and EMR Release 4.0.


EBS Volume Performance Characteristics
Amazon EMR allows you to use several different EBS volume types: General Purpose SSD (GP2), Magnetic, and Provisioned IOPS (SSD). You can choose different types of volumes depending upon the nature of your job. Our internal testing suggests that the General Purpose SSD volumes should suffice for most workloads; however, we recommend that you test against your own workload. One thing to note is that the General Purpose SSD volumes provide a baseline performance of 3 IOPS/GiB (up to 10,000 IOPS) with the ability to burst to 3,000 IOPS for volumes under 1,000 GiB. Please see I/O Credits and Burst Performance for more details. Here is a comparison of the volume types:


                              General Purpose (SSD)            Provisioned IOPS                 Magnetic
Storage Media                 SSD-backed                       SSD-backed                       Magnetic-backed
Max Volume Size               16 TB                            16 TB                            1 TB
Max IOPS per Volume           10,000 IOPS                      20,000 IOPS                      ~100 IOPS
Max IOPS Burst Performance    3,000 IOPS for volumes <= 1 TB   n/a                              Hundreds
Max Throughput per Volume     160 MB/second                    320 MB/second                    40-90 MB/second
Max IOPS per Node (16K)       48,000                           48,000                           48,000
Max Throughput per Instance   800 MB/second                    800 MB/second                    800 MB/second
Latency (Random Read)         1-2 ms                           1-2 ms                           20-40 ms
API Name                      gp2                              io1                              standard
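As a quick illustration of the gp2 baseline formula mentioned above, here is the arithmetic for a few volume sizes:

```python
# Sketch: baseline and burst IOPS for gp2 volumes, per the 3 IOPS/GiB rule above.
def gp2_iops(size_gib):
    baseline = min(3 * size_gib, 10000)          # 3 IOPS per GiB, capped at 10,000
    burst = 3000 if size_gib < 1000 else None    # burst only applies below 1,000 GiB
    return baseline, burst

print(gp2_iops(500))    # (1500, 3000) - a 500 GiB volume bursts well above its baseline
print(gp2_iops(1000))   # (3000, None) - at 1,000 GiB the baseline already reaches 3,000 IOPS
print(gp2_iops(4000))   # (10000, None) - large volumes hit the 10,000 IOPS cap
```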

Support for M4 and C4 Instances
You can now launch EMR clusters that use M4 and C4 instances in regions where they are available. The M4 instances feature a custom Intel Xeon E5-2676 v3 (Haswell) processor and the C4 instances are based on the Intel Xeon E5-2666 v3 processor. These instances are designed to deliver the highest level of processor performance on EC2. Both types of instances offer Enhanced Networking, which delivers up to 4 times the packet rate of instances without Enhanced Networking, while ensuring consistent latency, even under high network I/O. Both the M4 and C4 instances are EBS-Optimized by default, with additional, dedicated network capacity for I/O operations. The instances support 64-bit HVM AMIs and can be launched only within a VPC.


Please see the Amazon EMR Pricing page for more details on the prices for these instances.


Productivity Tip
You can generate a create-cluster command that represents the configuration of an existing EMR 4.x cluster, including the EBS volumes. This will allow you to recreate the cluster using the AWS Command Line Interface (CLI).



Available Now
These new features are available now and you can start using them today!


-- Abhishek Sinha, Senior Product Manager



Monday, February 15, 2016

AWS Week in Review - February 8, 2016

Let's take a quick look at what happened in AWS-land last week:


Monday, February 8
Tuesday, February 9
Wednesday, February 10
Thursday, February 11
Friday, February 12
Saturday, February 13
Sunday, February 14


New & Notable Open Source



  • Vault is a tool for managing secrets.

  • credstash is a little utility for managing credentials in the cloud.

  • microservices-playground is for microservices running on AWS in a Docker Container using ECS.

  • aws-role-editor is a Google Chrome extension to modify roles in the AWS Console.

  • aws-request-signer is a Google Chrome extension that signs requests to AWS endpoints using SigV4.

  • conan-aws-lambda is an AWS Lambda plugin for Conan the Deployer.

  • aws-nats is a Python and CloudFormation script to run a NATS cluster in AWS.

  • shepherd is a framework for building APIs using AWS API Gateway and AWS Lambda.

  • aws-lambda-ffmpeg is an AWS Lambda function that resizes a video and outputs a thumbnail using FFmpeg.

  • aerosol is a DSL and Gem for defining an AWS architecture.


New SlideShare Presentations



New Customer Success Stories



  • Bazaarvoice - The company offers a technology platform and services that help customers collect and analyze consumer content, using that data to increase sales and improve their products and services.

  • Bitdefender - By using AWS, the developers at Bitdefender have additional tools to innovate and scale on demand with near-zero downtime, offering customers flexible, cost-effective security solutions.

  • The Globe and Mail - The Globe and Mail is using AWS to deliver dynamic, personalized content to its readers, helping boost reader engagement by 25 percent.

  • gumi Asia - Using AWS has enabled gumi Asia to achieve 99.5 percent availability, support demand peaks 50 percent higher than normal with no impact on performance, and avoid system redundancy and staffing costs.

  • Intermountain Healthcare - Intermountain Healthcare, using AWS and working with APN partner Syapse, can provide fast, cloud-based services to oncologists across the United States so they can deliver precision medicine to cancer patients.

  • Invision - Using AWS, InVision can offer its workforce management solution at one tenth of the cost of its physical software product, helping it to lower its costs to contact-center customers and reach a previously untapped 85 percent of the market.


New YouTube Videos



Upcoming Events



Help Wanted



Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.


-- Jeff;



New - Access Resources in a VPC from Your Lambda Functions

A few months ago I announced that you would soon be able to access resources in a VPC from your AWS Lambda functions. I am happy to announce that this much-wanted feature is now available and that you can start using it today!


Your Lambda functions can now access Amazon Redshift data warehouses, Amazon ElastiCache clusters, Amazon Relational Database Service (RDS) instances, and service endpoints that are accessible only from within a particular VPC. In order to do this, you simply select one of your VPCs and identify the relevant subnets and security groups. Lambda uses this information to set up elastic network interfaces (ENIs) and private IP addresses (drawn from the subnet or subnets that you specified) so that your Lambda function has access to resources in the VPC.


Accessing Resources in a VPC
You can set this up when you create a new function. You can also update an existing function so that it has VPC access. You can configure this feature from the Lambda Console or from the CLI. Here's how you set it up from the Console:



That's all you need to do! Be sure to read Configuring a Lambda Function to Access Resources in an Amazon VPC in the Lambda documentation if you have any questions.
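If you'd rather script the change, here's a minimal sketch with boto3; the function name, subnet IDs, and security group ID are placeholders:

```python
# Sketch: give an existing Lambda function access to resources in a VPC.
import boto3

lam = boto3.client('lambda')

lam.update_function_configuration(
    FunctionName='my-function',                            # hypothetical function name
    VpcConfig={
        'SubnetIds': ['subnet-0abc1234', 'subnet-0def5678'],
        'SecurityGroupIds': ['sg-0123abcd'],
    },
)
```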


Things to Know
Here are a few things that you should know about this new feature:


ENI & IP Address Resources - Because Lambda automatically scales based on the number of events that it needs to process, your VPC must have an adequate supply of free IP addresses on the designated subnets.


Internet Access - As soon as you enable this functionality for a particular function, the function no longer has access to the Internet by default. If your function requires this type of access, you will need to set up a Managed NAT Gateway in your VPC (see New - Managed NAT (Network Address Translation) Gateway for AWS for more information) or run your own NAT (see NAT Instances).


Security Groups - The security groups that you choose for a function will control the function's access to the resources in the subnets and on the Internet.


S3 Endpoints - You can also use this feature to access S3 endpoints within a VPC (consult New - VPC Endpoint for Amazon S3 to learn more).


Webinar - To learn more about this new feature, join our upcoming webinar, Essentials: Introducing AWS VPC Support for AWS Lambda.

--
Jeff;


TSMC will be the Sole Supplier for the Next iPhones

According to a report from South Korea's Electronic Times, Taiwan Semiconductor Manufacturing (TSMC) will be the exclusive supplier of mobile processors for Apple's next iPhones (TSMC is one of the suppliers of the processors for the iPhone 6S and iPhone 6S Plus), which will use a 10-nanometre manufacturing technology. An unnamed source said ... Continue reading TSMC will be the Sole Supplier for the Next iPhones ->

Resources for Migrating Parse Applications to AWS

In light of the recent announcement that Parse will be winding down, the AWS team has been working to provide developers with some migration paths and some alternative services, as have members of the AWS community. Here's what I know about:



I also have some partner and community resources for you:



Migration Webinar
My colleague John Bury (Principal Solution Architect) has been working in the mobile space for more than 12 years. On March 1st he will lead a 200-level webinar, Migrating Mobile Apps from Parse to AWS. The webinar will run from 11 AM to Noon (PT). After an introductory look at the full range of AWS mobile services, John will lead you through the steps necessary to migrate your mobile app from Parse to AWS.

--
Jeff;

PS - Please share additional resources in the comment section and I'll add them to the post.




Wednesday, February 10, 2016

Valentine's Day Offer for Kindle Unlimited

If your Valentine is into reading, this limited deal from Amazon may just be the perfect gift for Valentine's Day. Amazon is currently offering a 25% discount for Kindle Unlimited. Memberships vary from 6 months to 24 months of unlimited reading and listening to over a million e-books and thousands of audiobooks. 6 Months: ... Continue reading Valentine's Day Offer for Kindle Unlimited ->

Tuesday, February 9, 2016

Lumberyard + Amazon GameLift + Twitch for Games on AWS

Building world-class games is a very difficult, time-consuming, and expensive process. The audience is incredibly demanding. They want engaging, social play that spans a wide variety of desktop, console, and mobile platforms. Due to the long lead time inherent in the game development and distribution process, the success or failure of the game can often be determined on launch day, when pent-up demand causes hundreds of thousands or even millions of players to sign in and take the game for a spin.


Behind the scenes, the development process must be up to this challenge. Game creators must be part of a team that includes developers with skills in storytelling, game design, physics, logic design, sound creation, graphics, visual effects, and animation. If the game is network-based, the team must also include expertise in scaling, online storage, network communication & management, and security.


With development and creative work that can take 18 to 36 months, today's games represent a considerable financial and reputational risk for the studio. Each new game is a make-or-break affair.


New AWS Game Services
Today I would like to tell you about a pair of new AWS products that are designed for use by professional game developers building cloud-connected, cross-platform games. We started with several proven, industry leading engines and developer tools, added a considerable amount of our own code, and integrated the entire package with our Twitch video platform and community, while also mixing in access to relevant AWS messaging, identity, and storage services. Here's what we are announcing today:


Lumberyard - A game engine and development environment designed for professional developers. A blend of new and proven technologies from CryEngine, Double Helix, and AWS, Lumberyard simplifies and streamlines game development. As a game engine, it supports development of cloud-connected and standalone 3D games, with support for asset management, character creation, AI, physics, audio, and more. On the development side, the Lumberyard IDE allows you to design indoor and outdoor environments, starting from a blank canvas. You (I just promoted you to professional game developer) can take advantage of built-in content workflows and an asset pipeline, editing game assets in Photoshop, Maya, or 3ds Max and bringing them into the IDE afterward. You can program your game in the traditional way using C++ and Visual Studio (including access to the AWS SDK for C++) or you can use our Flow Graph tool and the cool new Cloud Canvas to create cloud-connected gameplay features using visual scripting.


Amazon GameLift - Many modern games include a server or backend component that must scale in proportion to the number of active sessions. Amazon GameLift will help you to deploy and scale session-based multiplayer game servers for the games that you build using Lumberyard. You simply upload your game server image to AWS and deploy the image into a fleet of EC2 instances that scales up as players connect and play. You don't need to invest in building, scaling, running, or monitoring your own fleet of servers. Instead, you pay a small fee per daily active user (DAU) and the usual EC2 On-Demand rates for the compute capacity, EBS storage, and bandwidth that your users consume.



Twitch Integration - Modern gamers are a very connected bunch. When they are not playing themselves, they like to connect and interact with other players and gaming enthusiasts on Twitch. Professional and amateur players display their talents on Twitch and create large, loyal fan bases. In order to take this trend even further and to foster the establishment of deeper connections and stronger communities, games built with Lumberyard will be able to take advantage of two new Twitch integration features. Twitch ChatPlay allows you to build games that respond to keywords in a Twitch chat stream. For example, the audience can vote to have the player take the most desired course of action. Twitch JoinIn allows a broadcaster to invite a member of the audience into the game from within the chat channel.


These services, like many other parts of AWS, are designed to allow you to focus on the unique and creative aspects of your game, with an emphasis on rapid turnaround and easy iteration so that you can continue to hone your gameplay until it reaches the desired level of engagement and fun.


Support Services - As the icing on this cake, we are also launching a range of support options including a dedicated Lumberyard forum and a set of tutorials (both text and video). Multiple tiers of paid AWS support are also available.


Developing with Lumberyard
Lumberyard is at the heart of today's announcement. As I mentioned earlier, it is designed for professional developers and supports development of high-quality, cross-platform games. We are launching with support for the following environments:



  • Windows - Vista, Windows 7, 8, and 10.

  • Console - PlayStation 4 and Xbox One.


Support for mobile devices and VR headsets is in the works and should be available within a couple of months.


The Lumberyard development environment runs on your Windows PC or laptop. You'll need a fast, quad-core processor, at least 8 GB of memory, 200 GB of free disk space, and a high-end video card with 2 GB or more of memory and DirectX 11 compatibility. You will also need Visual Studio 2013 Update 4 (or newer) and the Visual C++ Redistributable packages for Visual Studio 2013.


The Lumberyard Zip file contains the binaries, templates, assets, and configuration files for the Lumberyard Editor. It also includes binaries and source code for the Lumberyard game engine. You can use the engine as-is, you can dig in to the source code for reference purposes, or you can customize it in order to further differentiate your game. The Zip file also contains the Lumberyard Launcher. This program makes sure that you have properly installed and configured Lumberyard and the third party runtimes, SDKs, tools, and plugins.


The Lumberyard Editor encapsulates the game under development and a suite of tools that you can use to edit the game's assets.



The Lumberyard Editor includes a suite of editing tools (each of which could be the subject of an entire blog post) including an Asset Browser, a Layer Editor, a LOD Generator, a Texture Browser, a Material Editor, Geppetto (character and animation tools), a Mannequin Editor, Flow Graph (visual programming), an AI Debugger, a Track View Editor, an Audio Controls Editor, a Terrain Editor, a Terrain Texture Layers Editor, a Particle Editor, a Time of Day Editor, a Sun Trajectory Tool, a Composition Editor, a Database View, and a UI Editor. All of the editors (and much more) are accessible from one of the toolbars at the top.


In order to allow you to add functionality to your game in a selective, modular form, Lumberyard uses a code packaging system that we call Gems. You simply enable the desired Gems and they'll be built and included in your finished game binary automatically. Lumberyard includes Gems for AWS access, Boids (for flocking behavior), clouds, game effects, access to GameLift, lightning, physics, rain, snow, tornadoes, user interfaces, multiplayer functions, and a collection of woodlands assets (for detailed, realistic forests).


Coding with Flow Graph and Cloud Canvas
Traditionally, logic for games was built by dedicated developers, often in C++ and with the usual turnaround time for an edit/compile/run cycle. While this option is still open to you if you use Lumberyard, you also have two other options: Lua and Flow Graph.


Flow Graph is a modern and approachable visual scripting system that allows you to implement complex game logic without writing or modifying any code. You can use an extensive library of pre-built nodes to set up gameplay, control sounds, and manage effects.


Flow graphs are made from nodes and links; a single level can contain multiple graphs and they can all be active at the same time. Nodes represent game entities or actions. Links connect the output of one node to the input of another one. Inputs have a type (Boolean, Float, Int, String, Vector, and so forth). Output ports can be connected to an input port of any type; an automatic type conversion is performed (if possible).


There are over 30 distinct types of nodes, including a set (known as Cloud Canvas) that provides access to various AWS services. These include two nodes that provide access to Amazon Simple Queue Service (SQS), four nodes that provide access to Amazon Simple Notification Service (SNS), seven nodes that provide read/write access to Amazon DynamoDB, one to invoke an AWS Lambda function, and another to manage player credentials using Amazon Cognito. All of the game's calls to AWS are made via an AWS Identity and Access Management (IAM) user that you configure in Cloud Canvas.


Here's a node that invokes a Lambda function named DailyGiftLambda:



Here is a flow graph that uses Lambda and DynamoDB to implement a "Daily Gift" function:
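On the backend, a function like DailyGiftLambda might look something like this sketch. The table name, key names, input fields, and reward logic are all hypothetical and shown only to illustrate the Lambda-plus-DynamoDB pattern:

```python
# Sketch: one possible backend for a "Daily Gift" feature, invoked from a Cloud
# Canvas Lambda node. Table name, key names, and reward logic are hypothetical.
import datetime
import boto3

table = boto3.resource('dynamodb').Table('DailyGifts')    # hypothetical table

def handler(event, context):
    player_id = event['playerId']                          # assumed input field
    today = datetime.date.today().isoformat()

    record = table.get_item(Key={'PlayerId': player_id}).get('Item')
    if record and record.get('LastGiftDate') == today:
        return {'giftGranted': False, 'reason': 'already claimed today'}

    table.put_item(Item={'PlayerId': player_id, 'LastGiftDate': today})
    return {'giftGranted': True, 'gift': '100 coins'}
```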



As usual, I have barely scratched the surface here! To learn more, read the Cloud Canvas documentation in the Lumberyard User Guide.


Deploying With Amazon GameLift
If your game needs a scalable, cloud-based runtime environment, you should definitely take a look at Amazon GameLift.



You can use it to host many different types of shared, connected, regularly-synchronized games including first-person shooters, survival & sandbox games, racing games, sports games, and MOBA (Multiplayer Online Battle Arena) games.


After you build your server-side logic, you simply upload it to Amazon GameLift. It will be converted to a Windows-based AMI (Amazon Machine Image) in a matter of minutes. Once the AMI is ready, you can create an Amazon GameLift fleet (or a new version of an existing one), point it at the AMI, and your backend will be ready to go.
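Scripted, the fleet creation step might look roughly like the sketch below. The build ID, launch path, instance type, and port settings are placeholders, the upload step itself is not shown, and you should consult the GameLift documentation for the full set of parameters your game server needs:

```python
# Sketch: point a new GameLift fleet at a previously uploaded build.
import boto3

gamelift = boto3.client('gamelift')

gamelift.create_fleet(
    Name='my-game-fleet',                                     # hypothetical name
    BuildId='build-12345678-aaaa-bbbb-cccc-1234567890ab',     # from the build upload step
    EC2InstanceType='c3.large',
    ServerLaunchPath='C:\\game\\MyGameServer.exe',            # hypothetical launch path
    EC2InboundPermissions=[
        {'FromPort': 33435, 'ToPort': 33435, 'IpRange': '0.0.0.0/0', 'Protocol': 'UDP'},
    ],
)
```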


Your fleets, and the game sessions running on each fleet, are visible in the Amazon GameLift Console:



Your Flow Graph code can use the GameLift Gem to create an Amazon GameLift session and to start the session service.


To learn more, consult the Amazon GameLift documentation.


Twitch Integration
Last but definitely not least, your games can integrate with Twitch via Twitch ChatPlay and Twitch JoinIn.


As I mentioned earlier, you can create games that react to keywords entered in a designated Twitch channel. For example, here's a Flow Graph that listens for the keywords red, yellow, blue, green, orange, and violet.



Pricing and Availability
Lumberyard and Amazon GameLift are available now and you can start building your games today!


You can build and run connected and standalone games using Lumberyard at no charge. You are responsible for the AWS charges for any calls made to AWS services using the IAM user configured in Cloud Canvas, or through calls made using the AWS SDK for C++, along with any charges for the use of GameLift.


Amazon GameLift is launching in the US East (Northern Virginia) and US West (Oregon) regions, and will be coming to other AWS regions as well. As part of the AWS Free Usage Tier, you can run a fleet consisting of one c3.large instance for up to 125 hours per month for a period of one year. After that, you pay the usual On-Demand rates for the EC2 instances that you use, plus the charge for 50 GB / month of EBS storage per instance, and $1.50 per month for every 1,000 daily active users.

--
Jeff;