Tuesday, June 30, 2015

New – AWS Budgets and Forecasts

The dynamic, pay-as-you-go nature of the AWS Cloud gives you the opportunity to build systems that respond gracefully to changes in load while paying only for the compute, storage, network, database, and other resources that you actually consume.

Over the last couple of years, as our customer base has become increasingly sophisticated and cloud-aware, we have been working to provide equally sophisticated tools for viewing and managing costs. Many enterprises use AWS for multiple projects, often spread across multiple departments and billed directly or through linked accounts.

In the budget-centric environment found in most enterprises, no one likes a surprise (unless it is an AWS price reduction). Our goal is to give you a broad array of cost management tools that provide the information you need to know what you are spending now and how much you can expect to spend in the future. We also want to make sure that you get an early warning if costs exceed your expectations for some reason.

We launched the Cost Explorer last year. This tool integrates with the AWS Billing Console and gives you reporting, analytics, and visualization tools to help you to track and manage your AWS costs.

New Budgets and Forecasts
Today we are adding support for budgets and forecasts. You can now define and track budgets for your AWS costs, forecast your AWS costs for up to three months out, and choose to receive email notification when actual costs exceed, or are forecast to exceed, your budgeted costs.

Budgeting and forecasting take place on a fine-grained basis, with filtering or customization based on Availability Zone, Linked Account, API operation, Purchase Option (e.g. Reserved), Service, and Tag.

The operations provided by these new tools replace the tedious and time-consuming manual calculations that many of our customers (both large and small) have been performing as part of their cost management and budgeting process. After running a private beta with over a dozen large-scale AWS customers, we are confident that these tools will help you to do an even better job of understanding and managing your costs.

Let’s take a closer look at these new features!

New Budgets
You can now set monthly budgets around AWS costs, customized by multiple dimensions including tags. For example, you could create budgets to track EC2, RDS, and S3 costs separately for each active development effort.

The AWS Management Console will list each of your budgets (you can also filter by name):

Here’s how you create a new budget. As you can see, you can choose to include costs related to any desired list of AWS services:

You can set alarms that will trigger based on actual or forecast costs, with email notification to a designated individual or group. These alarms make use of Amazon CloudWatch but are somewhat more abstract in order to better meet the needs of your business and accounting folks. You can create multiple alarms for each budget. Perhaps you want one alarm to trigger when actual costs exceed 80% of budget costs and another when forecast costs exceed budgeted costs.
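If you would rather script your budgets than click through the console, there is also a programmatic path via the AWS Budgets API. Here is a minimal boto3 sketch (boto3 is the Python SDK covered later in this collection); treat it as an illustration rather than a recipe, since the account ID, budget name, tag filter, and email address are placeholders and the tag filter format is my assumption:

import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",                    # placeholder account ID
    Budget={
        "BudgetName": "dev-team-monthly",        # hypothetical budget name
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        # Track only costs that carry a particular cost allocation tag
        # (filter key and value format assumed for illustration).
        "CostFilters": {"TagKeyValue": ["user:Department$Engineering"]},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Email when actual costs exceed 80% of the budgeted amount.
            "Notification": {"NotificationType": "ACTUAL",
                             "ComparisonOperator": "GREATER_THAN",
                             "Threshold": 80.0,
                             "ThresholdType": "PERCENTAGE"},
            "Subscribers": [{"SubscriptionType": "EMAIL",
                             "Address": "finance@example.com"}],
        },
        {
            # Email when forecasted costs exceed the budgeted amount.
            "Notification": {"NotificationType": "FORECASTED",
                             "ComparisonOperator": "GREATER_THAN",
                             "Threshold": 100.0,
                             "ThresholdType": "PERCENTAGE"},
            "Subscribers": [{"SubscriptionType": "EMAIL",
                             "Address": "finance@example.com"}],
        },
    ],
)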

You can also view variances (budgeted vs. actual) in the console. Here’s an example:

New Forecasts
Many AWS teams use an internal algorithm to predict demand for their offerings. They use the results to help them to allocate development and operational resources, plan and execute marketing campaigns, and more. Our new budget forecasting tool makes use of the same algorithm to present you with cost estimates that include both 80% and 95% confidence interval ranges.
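The console is the primary way to work with these forecasts; if you want to pull similar numbers programmatically, the Cost Explorer API offers a get_cost_forecast call. Here is a hedged boto3 sketch (the dates are placeholders, and the availability of this API call relative to today's launch is my assumption):

import boto3

ce = boto3.client("ce", region_name="us-east-1")

forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2015-07-01", "End": "2015-10-01"},  # about three months out
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
    PredictionIntervalLevel=80,   # use 95 for the wider confidence band
)

print("Total forecast:", forecast["Total"]["Amount"], forecast["Total"]["Unit"])
for period in forecast["ForecastResultsByTime"]:
    print(period["TimePeriod"]["Start"],
          period["MeanValue"],
          period["PredictionIntervalLowerBound"],
          period["PredictionIntervalUpperBound"])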

As is the case with budgets, you can filter forecasts on a wide variety of dimensions. You can create multiple forecasts and you can view them in the context of historical costs.

After you create a forecast, you can view it as a line chart or as a bar chart:

As you can see from the screen shots, the forecast, budget, and confidence intervals are all clearly visible:

These new features are available now and you can start using them today!

Jeff;

Friday, June 26, 2015

Amazon Announces the Alexa Skills Kit, Enabling Developers to Create New Voice Capabilities

Today, Amazon announced the Alexa Skills Kit (ASK), a collection of self-service APIs and tools that make it fast and easy for developers to create new voice-driven capabilities for Alexa. With a few lines of code, developers can easily integrate existing web services with Alexa or, in just a few hours, they can build entirely new experiences designed around voice. No experience with speech recognition or natural language understanding is required—Amazon does all the work to hear, understand, and process the customer’s spoken request so you don’t have to. All of the code runs in the cloud — nothing is installed on any user device.

The easiest way to build a skill for Alexa is to use AWS Lambda, an innovative compute service that runs a developer’s code in response to triggers and automatically manages the compute resources in the AWS Cloud, so there is no need for a developer to provision or continuously run servers. Developers simply upload the code for the new Alexa skill they are creating, and AWS Lambda does the rest, executing the code in response to Alexa voice interactions and automatically managing the compute resources on the developer’s behalf.

Using a Lambda function for your service also eliminates some of the complexity around setting up and managing your own endpoint:

  • You do not need to administer or manage any of the compute resources for your service.
  • You do not need an SSL certificate.
  • You do not need to verify that requests are coming from the Alexa service yourself. Access to execute your function is controlled by permissions within AWS instead.
  • AWS Lambda runs your code only when you need it and scales with your usage, so there is no need to provision or continuously run servers.
  • For most developers, the Lambda free tier is sufficient for the function supporting an Alexa skill. The first one million requests each month are free. Note that the Lambda free tier does not automatically expire, but is available indefinitely.

AWS Lambda supports code written in Node.js (JavaScript) and Java. You can copy JavaScript code directly into the inline code editor in the AWS Lambda console or upload it in a zip file. For basic testing, you can invoke your function manually by sending it JSON requests in the Lambda console.

In addition, Amazon announced today that the Alexa Voice Service (AVS), the same service that powers Amazon Echo, is now available to third-party hardware makers who want to integrate Alexa into their devices—for free. For example, a Wi-Fi alarm clock maker can create an Alexa-enabled clock radio, so a customer can talk to Alexa as they wake up, asking “What’s the weather today?” or “What time is my first meeting?” Read the press release here.

Got an innovative idea for how voice technology can improve customers’ lives? The Alexa Fund was also announced today and will provide up to $100 million in investments to fuel voice technology innovation. Whether that’s creating new Alexa capabilities with the Alexa Skills Kit, building devices that use Alexa for new and novel voice experiences using the Alexa Voice Service, or something else entirely, if you have a visionary idea, Amazon would love to hear from you.

For more details about Alexa you can check out today’s announcements on the AWS blog and Amazon Appstore blog.

AWS Public Sector Update – City on a Cloud and More

Earlier today we opened the 6th annual AWS Government, Education, and Nonprofits Symposium in Washington, DC. As part of the event we announced another City on a Cloud Challenge, an upcoming AWS Public Data Set, and some information about the overall usage and growth of AWS in this space.

City on a Cloud Challenge
We are now looking for entries for the second City on a Cloud Challenge! With awards totaling $250,000 in AWS credits, this program is designed to recognize local and regional governments (along with developers) that are pushing forward with the cloud in innovative ways.

Entries must use (or propose the use of) AWS. Prizes will be awarded to eight grand prize winners in three categories (Best Practices, Partners in Innovation, and Dream Big). Entries must be received by August 21, 2015 so that we can choose the finalists in September and announce the winners at AWS re:Invent.

Winners of the 2014 City on a Cloud Challenge included:

  • Sustainable Streets (New York City DOT)
  • Disaster Recovery (City of Asheville, North Carolina)
  • Smart Airport Experience (London City Airport)
  • City mapping (City and County of San Francisco)
  • Crime and risk mapping (Hunchlab)
  • N_Sight IQ (Neptune Technology Group)
  • ePropertyPlus inventory management
  • DKAN open data platform (Nucivic)

New AWS Public Data Set – NEXRAD (Coming Soon)
The Next Generation Weather Radar (NEXRAD) is a network of 160 high-resolution Doppler radar sites throughout the United States and select overseas locations whose data is managed by the National Oceanic and Atmospheric Administration (NOAA). NEXRAD detects precipitation and atmospheric movement and disseminates data in 5-minute intervals from each site. As part of the NOAA Big Data Project, AWS will be making NEXRAD data freely available on Amazon S3. I’ll share more information (via a blog post or Twitter) as soon as I get it.

AWS Usage and Growth
Our customers are using AWS to run their classrooms, schools, departments, agencies, and research projects. Here are some of the numbers that we announced at the symposium:

  • 4,500 educational institutions use AWS.
  • 1,700 government agencies use AWS.
  • 17,000 non-profit organizations use AWS.

AWS GovCloud (US) is an isolated AWS region used by US government agencies and customers to host sensitive workloads in the cloud. On a year-over-year basis, the number of customers for this region has grown by 273%.

Jeff;

New – Alexa Skills Kit, Alexa Voice Service, Alexa Fund

Amazon Echo is a new type of device designed around your voice. Echo connects to Alexa, a cloud-based voice service powered (of course) by AWS. You can ask Alexa to provide information, answer questions, play music, read the news, and get results or answers instantly.

When you are in the same room as an Amazon Echo, you simply say the wake word (either “Alexa” or “Amazon”) and then make your request. For example, you might say “Alexa, when do the Seattle Mariners play next?” or “Alexa, will it ever rain in Seattle?” Behind the scenes, code running in the cloud hears, understands, and processes your spoken requests.

Today we are giving you the ability to create new voice-driven capabilities (also known as skills) for Alexa using the new Alexa Skills Kit (ASK). You can connect existing services to Alexa in minutes with just a few lines of code. You can also build entirely new voice-powered experiences in a matter of hours, even if you know nothing about speech recognition or natural language processing.

We will also be opening up the underlying Alexa Voice Service (AVS) to developers in preview form. Hardware manufacturers and other participants in the new and exciting Internet of Things (IoT) world can sign up today for notification when the preview is available. Any device that has a speaker, a microphone, and an Internet connection can integrate Alexa with a few lines of code.

To help inspire creativity and to fuel innovation in and around voice technology, we are also announcing the Alexa Fund. The Alexa Fund will provide up to $100 million in investments to support developers, manufacturers, and start-ups of all sizes who are creating new experiences designed around the human voice to improve customers’ lives.

ASK and AWS Lambda
You can build new skills for Alexa using AWS Lambda. You simply write the code using Node.js and upload it to Lambda through the AWS Management Console, where it becomes known as a Lambda function. After you upload and test your function using the sample events built in to the Console, you can sign in to the Alexa Developer Portal, register your code in the portal (by creating an Alexa App), and then use the ARN (Amazon Resource Name) of the function to connect it to the App. After you complete your testing, you can publish your App in order to make it available to Echo owners. Lambda will take care of hosting and running your code in a scalable, fault-tolerant environment. In many cases, the function that supports an Alexa skill will remain comfortably within the Lambda Free Tier. Read Developing Your Alexa Skill as a Lambda Function to get started.

ASK as a Web Service
You can also build your app as a web service and take on more of the hosting duties yourself using Amazon Elastic Compute Cloud (EC2), AWS Elastic Beanstalk, or an on-premises server fleet. If you choose any of these options, the service must be Internet-accessible and it must adhere to the Alexa app interface specification. It must support HTTPS over SSL/TLS on port 443 and it must provide a certificate that matches the domain name of the service endpoint. Your code is responsible for verifying that the request actually came from Alexa and for checking the time-based message signature. To learn more about this option, read Developing Your Alexa App as a Web Service.
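One small piece of that verification is checking that the request timestamp is recent. Here is a hedged Python sketch of just that piece; the certificate chain and signature checks are more involved and omitted here, and the 150-second tolerance is my assumption, so check the specification for the authoritative value:

from datetime import datetime, timedelta

TOLERANCE = timedelta(seconds=150)   # assumed tolerance; see the ASK docs

def timestamp_is_fresh(request_body):
    # Alexa requests carry an ISO-8601 timestamp such as "2015-06-25T19:20:25Z".
    stamp = datetime.strptime(request_body["request"]["timestamp"],
                              "%Y-%m-%dT%H:%M:%SZ")
    return abs(datetime.utcnow() - stamp) <= TOLERANCE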

Learn More
We are publishing a lot of information about ASK, AVS, and the Alexa Fund today. Here are some good links to get you started:

Jeff;

Wednesday, June 24, 2015

Focusing on Spot Instances – Let’s Talk About Best Practices

I often point to EC2 Spot Instances as a feature that can be implemented with any degree of utility only at world scale.

Unless you have a massive amount of compute power and a multitude of customers spread across every time zone in the world, with a wide variety of workloads, you simply won’t have the ever-changing shifts in supply and demand (and the resulting price changes) that are needed to create a genuine market. As a quick reminder, Spot Instances allow you to save up to 90% (when compared to On-Demand pricing) by placing bids for EC2 capacity. Instances will run whenever your bid exceeds the current Spot Price and can be terminated (with a two-minute warning) in the presence of higher bids for the same capacity (as determined by region, availability zone, and instance type).
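To make the bidding model concrete, here is a minimal boto3 sketch of placing a Spot bid; the AMI ID, key pair, and bid price are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",                  # the most you are willing to pay, per instance-hour
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-12345678",     # placeholder AMI
        "InstanceType": "m3.medium",
        "KeyName": "my-key-pair",      # placeholder key pair
    },
)

for request in response["SpotInstanceRequests"]:
    print(request["SpotInstanceRequestId"], request["State"])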

Because Spot Instances come and go, you need to pay attention to your bidding strategy and to your persistence model in order to maximize the value that you derive from them. Looked at another way, by structuring your application appropriately you can be in a position to save up to 90% (or, if you have a flat budget, you can get 10x as much computing done). This is a really interesting spot for you, as the cloud architect for your organization. You can exercise your technical skills to drive the cost of compute power toward zero, while making applications that are price-aware and more fault-tolerant. Master the ins and outs of Spot Instances and you (and your organization) will win!

The Trend is Clear
As I look back at the history of EC2 — from launching individual instances on demand, then on to Spot Instances, Containers, and Spot Fleets — the trend is pretty clear. Where you once had to pay attention to individual, long-running instances and to list prices, you can now think about collections of instances with an indeterminate lifetime, running at the best possible price, as determined by supply and demand within individual capacity pools (groups of instances that share the same attributes). This new way of thinking can liberate you from some older thought patterns and can open the door to some new and intriguing ways to obtain massive amounts of compute capacity quickly and cheaply, so you can build really cool applications at a price you can afford.

I should point out that there’s a win-win situation when it comes to Spot. You (and your customers) win by getting compute power at the most economical price possible at a given point in time. Amazon wins because our fleet of servers (see the AWS Global Infrastructure page for a list of locations) is kept busy doing productive work. High utilization improves our cost structure, and also has an environmental benefit.

Spot Best Practices
Over the next few months, with a lot of help from the EC2 Spot Team, I am planning to share some best practices for the use of Spot Instances. Many of these practices will be backed up with real-world examples that our customers have shared with us; these are not theoretical or academic exercises. Today I would like to kick off the series by briefly outlining some best practices.

Let’s define the concept of a capacity pool in a bit more detail. As I alluded to above, a capacity pool is a set of available EC2 instances that share the same region, availability zone, operating system (Linux/Unix or Windows), and instance type. Each EC2 capacity pool has its own availability (the number of instances that can be launched at any particular moment in time) and its own price, as determined by supply and demand. As you will see, applications that can run across more than one capacity pool are in the best position to consistently access the most economical compute power. Note that capacity in a pool is shared between On-Demand and Spot instances, so Spot prices can rise from either more demand for Spot instances or an increase in requests for On-Demand instances.

Here are some best practices to get you started.

Build Price-Aware Applications – I’ve said it before: cloud computing is a combination of a business model and a technology. You can write code (and design systems) that are price-aware, and that have the potential to make your organization’s cloud budget go a lot further. This is a new area for a lot of technologists; my advice to you is to stretch your job description (and your internal model of who you are and what your job entails) to include designing for cost savings.

You can start by spending some time investigating the full range of capacity pools that are available to you within the region(s) that you use to run your app, either by hand or by building some tools that use the EC2 API or the AWS Command Line Interface (CLI). High prices and a high degree of price variance over time indicate that many of your competitors are bidding for capacity in the same pool. Seek out pools with lower and more stable prices (both current and historic) to find bargains and lower interruption rates.

Check the Price History – You can access historical prices on a per-pool basis going back 90 days (3 months). Instances that are currently very popular with our customers (the R3s as I write this) tend to have Spot prices that are somewhat more volatile. Older generations (including c1.xlarge, m1.small, cr1.8xlarge, and cc2.8xlarge) tend to be much more stable. In general, picking older generations of instances will result in lower net prices and fewer interruptions.
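The same 90-day history is available through the EC2 API, so you can script this investigation. Here is a sketch using boto3; the instance type and Availability Zone are just examples:

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2", region_name="us-east-1")

history = ec2.describe_spot_price_history(
    InstanceTypes=["m1.small"],
    ProductDescriptions=["Linux/UNIX"],
    AvailabilityZone="us-east-1a",
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
)

for point in history["SpotPriceHistory"]:
    print(point["Timestamp"], point["InstanceType"], point["SpotPrice"])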

Use Multiple Capacity Pools – Many types of applications can run (or can be easily adapted to run) across multiple capacity pools. By having the ability to run across multiple pools, you reduce your application’s sensitivity to price spikes that affect a pool or two (in general, there is very little correlation between prices in different capacity pools). For example, if you run in five different pools your price swings and interruptions can be cut by 80%.

A high-quality approach to this best practice can result in multiple dimensions of flexibility, and access to many capacity pools. You can run across multiple availability zones (fairly easy in conjunction with Auto Scaling and the Spot Fleet API) or you can run across different sizes of instances within the same family (Amazon EMR takes this approach). For example, your app might figure out how many vCPUs it is running on, and then launch enough worker threads to keep all of them occupied.

Adherence to this best practice also implies that you should strive to use roughly equal amounts of capacity in each pool; this will tend to minimize the impact of changes to Spot capacity and Spot prices.
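The Spot Fleet API mentioned above is one way to put this into practice, since a single request can span several capacity pools and the diversified allocation strategy spreads capacity roughly evenly across them. Here is a hedged sketch; the IAM fleet role, AMI, subnets, and prices are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Four capacity pools: two instance sizes across two Availability Zones (via subnets).
pools = [
    {"ImageId": "ami-12345678", "InstanceType": "c3.xlarge",  "SubnetId": "subnet-aaaa1111"},
    {"ImageId": "ami-12345678", "InstanceType": "c3.2xlarge", "SubnetId": "subnet-aaaa1111"},
    {"ImageId": "ami-12345678", "InstanceType": "c3.xlarge",  "SubnetId": "subnet-bbbb2222"},
    {"ImageId": "ami-12345678", "InstanceType": "c3.2xlarge", "SubnetId": "subnet-bbbb2222"},
]

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",  # placeholder
        "SpotPrice": "0.10",                  # maximum price per instance-hour
        "TargetCapacity": 8,
        "AllocationStrategy": "diversified",  # spread across the listed pools
        "LaunchSpecifications": pools,
    },
)

print(response["SpotFleetRequestId"])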

To learn more, read about Spot Instances in the EC2 Documentation.

Stay Tuned
As I mentioned, this is an introductory post and we have a lot more ideas and code in store for you! If you have feedback, or if you would like to contribute your own Spot tips to this series, please send me (awseditor@amazon.com) a note.

Jeff;

Tuesday, June 23, 2015

New AWS Quick Starts – Trend Micro Deep Security and Microsoft Lync Server

We have prepared a pair of new AWS Quick Start Reference Deployments for you! As is the case with all AWS Quick Starts, they help you to deploy fully functional enterprise software on the AWS Cloud in no time flat!

Each of the reference deployments includes an AWS CloudFormation template that follows AWS best practices for security and availability. These templates can be used as-is, customized, or used as the basis for even more elaborate solutions.
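Since the Quick Starts are delivered as CloudFormation templates, you can also launch them from code. Here is a hedged boto3 sketch; the stack name, template URL, and parameter are placeholders rather than the real Quick Start values:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="quickstart-demo",
    TemplateURL="https://s3.amazonaws.com/example-bucket/quickstart-template.json",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-key-pair"},  # hypothetical parameter
    ],
    Capabilities=["CAPABILITY_IAM"],   # many Quick Starts create IAM resources
)

# Block until the stack has finished launching.
cfn.get_waiter("stack_create_complete").wait(StackName="quickstart-demo")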

Trend Micro Deep Security
Trend Micro Deep Security is a host-based security product that provides intrusion detection and prevention, anti-malware, host firewall, file and system integrity monitoring, and log inspection modules in a single agent running in the guest operating system.

The Quick Start (Trend Micro Deep Security on the AWS Cloud) deploys Trend Micro Deep Security version 9.5 into an Amazon VPC using AMIs from the AWS Marketplace. It includes a pair of templates. The first one provides and end-to-end deployment into a new VPC; the second one works within an existing VPC.

Microsoft Lync Server
Lync Server 2013 is a communications software platform that offers instant messaging (IM), presence, conferencing, and telephony solutions for small, medium, and large businesses.

The Quick Start (Microsoft Lync Server 2013 on the AWS Cloud) implements a small or medium-sized Lync Server environment. This environment includes a pair of Lync Server 2013 Standard Edition pools across two Availability Zones for high availability.

Jeff;

Monday, June 22, 2015

New – Tag Your Amazon Glacier Vaults

Amazon Glacier is a secure, durable, and extremely low-cost storage service for data archiving and online backup (see my post, Amazon Glacier: Archival Storage for One Penny Per GB Per Month for an introduction).

Since we introduced Glacier in the summer of 2012, we have made it even more useful by adding lifecycle management, data retrieval policies & audit logging, range retrieval, and vault access policies.

Tag Your Vaults
If you are already a Glacier user (or if you have read my intro), you know that you create archives and store them in Glacier vaults.

Today we are making Glacier even more useful by giving you the ability to tag your vaults. You can use these tags for cost allocation purposes (by department, group, or any other desired categorization) or for other forms of tracking.

Here’s how you tag a vault with a key named “Department”:
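In addition to the console flow, you can attach the tag programmatically. Here is a boto3 sketch; the vault name and tag value are examples:

import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

glacier.add_tags_to_vault(
    accountId="-",                       # "-" means the account that owns the credentials
    vaultName="Backup-Photos",           # example vault name
    Tags={"Department": "Engineering"},  # example tag value
)

# Confirm that the tag is attached.
print(glacier.list_tags_for_vault(accountId="-", vaultName="Backup-Photos")["Tags"])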

After you have tagged your vaults, you can use the AWS Cost Allocation Reports to view a breakdown of costs and usage by tag.

As part of today’s launch, we updated the design of the Glacier console. We also made some speed improvements and added a filtering mechanism to make it easier for you to locate a particular vault. For example, here are all of my “Backup” vaults:

This new feature is available now and you can start using it today! To learn more, read about Tagging Your Glacier Vaults.

Jeff;

Now Available – AWS SDK For Python (Boto3)

My colleague Peter Moon sent the guest post below to introduce the newest version of the AWS SDK for Python, also known as Boto.

— Jeff;


Originally started as a Python client for Amazon S3 by Mitch Garnaat in 2006, Boto has been the primary tool for working with Amazon Web Services for many Python developers and system administrators across the world. Since its inception, Boto has been through an exciting journey of evolution driven by countless contributors from the Python community as well as AWS. It now supports almost 40 AWS services and is downloaded hundreds of thousands of times every week, according to PyPI. Thinking of the journey Boto has been through, I am very excited today to announce the next chapter in its history: the general availability of Boto3, the next major version of Boto.

Libraries must adapt to changes in users’ needs and also to changes in the platforms on which they run. As AWS’s growth accelerated over the years, the speed at which our APIs are updated has also gotten faster. This required us to devise a scalable method to quickly deliver support for multiple API updates every week, and this is why AWS API support in Boto3 is almost completely data-driven. Boto3 has ‘client’ classes that are driven by JSON-formatted API models that describe AWS APIs, so most new service features only require a simple model update. This allows us to deliver support for API changes very quickly, in a consistent and reliable manner.

Boto comes with many convenient abstractions that hide explicit HTTP API calls and offer intuitive Python classes for working with AWS resources such as Amazon Elastic Compute Cloud (EC2) instances or Amazon Simple Storage Service (S3) buckets. We formalized this concept in Boto3 and named it Resource APIs, which are also data-driven, built on resource models that sit on top of the API models. This architecture allows us to deliver convenient object-oriented abstractions in a scalable manner, not just for Boto3 but for other AWS SDKs as well, by sharing the same models across languages.
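Here is a small taste of the two styles side by side (the filter values are just examples):

import boto3

# Object-oriented "resource" interface.
s3 = boto3.resource("s3")
for bucket in s3.buckets.all():
    print(bucket.name)

ec2 = boto3.resource("ec2")
for instance in ec2.instances.filter(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    print(instance.id, instance.instance_type)

# The lower-level, data-driven "client" interface remains available as well.
s3_client = boto3.client("s3")
print(s3_client.list_buckets()["Buckets"])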

Python 3 had been one of the most frequent feature requests from Boto users until we added support for it in Boto last summer with much help from the community. While working on Boto3, we have kept Python 3 support in laser focus from the get-go, and each release we publish is fully tested on Python versions 2.6.5+, 2.7, 3.3, and 3.4. So customers using any of these Python versions can have full confidence that Boto3 will work in their environment.

Lastly, while we encourage all new projects to use Boto3 instead of Boto, and existing projects to migrate to Boto3, we understand that migrating an existing code base to a new major version can be difficult, time-consuming, or sometimes even nearly impossible. To alleviate the pain, Boto3 has a new top-level module name (boto3), so it can be used side-by-side with your existing code that uses Boto. This makes it easy for customers to start using all new features and API support available in Boto3, even if they’re only making incremental updates to an existing project.

As always, you can find us on GitHub (https://github.com/boto/boto3). We would love to hear any questions or feedback you have in the Issues section of the repository.

To get started, install Boto3 and read the docs!

$ pip install boto3

Peter Moon, Senior Product Manager, AWS SDKs and Tools

AWS Week in Review – June 15, 2015

Let’s take a quick look at what happened in AWS-land last week:

Monday, June 15
Tuesday, June 16
Wednesday, June 17
Thursday, June 18
Friday, June 19

Upcoming Events

Upcoming Events at the AWS Loft (San Francisco)

  • June 23 – Behind the Scenes with SignalFx: Operating a SaaS Product at Scale with Microservices, DevOps, and Self-Service Monitoring (6 PM – 7:30 PM).
  • June 26 – AWS Pop-up Loft Hack Series Sponsored by Intel (10 AM – 6 PM).

Upcoming Events at the AWS Loft (New York)

  • June 25 – Chef Bootcamp (10 AM – 6 PM).
  • June 25 – Oscar Health (6:30 PM).
  • June 26 – AWS Bootcamp (10 AM – 6 PM).
  • June 29 – Chartbeat (6:30 PM).
  • June 30 – Picking the Right Tool for the Job (HTML5 vs. Unity) (Noon – 1 PM).
  • June 30 – So You Want to Build a Mobile Game? (1 PM – 4:30 PM).
  • June 30 – Buzzfeed (6:30 PM).
  • July 6 – AWS Bootcamp (10 AM – 6 PM).
  • July 7 – Dr. Werner Vogels (Amazon CTO) + Startup Founders (6:30 PM).
  • July 7 – AWS Bootcamp (10 AM – 6 PM).
  • July 8 – Sumo Logic Panel and Networking Event (6:30 PM).
  • July 9 – AWS Activate Social Event (7 PM – 10 PM).
  • July 10 – Getting Started with Amazon EMR (Noon – 1 PM).
  • July 10 – Amazon EMR Deep Dive (1 PM – 2 PM).
  • July 10 – How to Build ETL Workflows Using AWS Data Pipeline and EMR (2 PM – 3 PM).
  • July 14 – Chef Bootcamp (10 AM – 6 PM).
  • July 15 – Chef Bootcamp (10 AM – 6 PM).
  • July 16 – Science Logic (11 AM – Noon).
  • July 16 – Intel Lustre (4 PM – 5 PM).
  • July 17 – Chef Bootcamp (10 AM – 6 PM).
  • July 22 – Mashery (11 AM – 3 PM).
  • July 23 – An Evening with Chef (6:30 PM).
  • July 29 – Evident.io (6:30 PM).
  • August 5 – Startup Pitch Event and Summer Social (6:30 PM).
  • August 25 – Eliot Horowitz, CTO and Co-Founder of MongoDB (6:30 PM).
  • AWS Summits.

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

Friday, June 19, 2015

New Preferred Payment Currency for AWS – Canadian Dollars (CAD)

Earlier this year we gave you the option to set the preferred payment currency for your AWS account, with a choice of twelve currencies when you use an eligible Visa or Mastercard to pay your bill. Today I am pleased to announce that we are giving you the option to specify Canadian Dollars (CAD) as your desired payment currency:

The change takes effect immediately; you will be able to view and pay your AWS bill in the currency that you choose. Your preferred currency will be used in the Billing Console Dashboard, the Bills page, and in your Payment History. Pricing for AWS services and the AWS Billing Reports will continue to be shown in USD.

Jeff;