Thursday, July 30, 2015

Welcome the Newest AWS Community Heroes (Summer 2015)

I would like to extend a warm welcome to the newest AWS Community Heroes:

  • Adam Smolnik
  • Kai Hendry
  • Onur Salk
  • Paolo Latella
  • Raphael Francis
  • Rob Linton

The Heroes share their enthusiasm for AWS via social media, blogs, events, user groups, and workshops. Let’s take a look at their bios to learn more.

Adam Smolnik
Adam is a Principal Software Engineer at Pitney Bowes, a global technology company offering products and solutions that enable commerce in the areas of customer information management, location intelligence, customer engagement, shipping and mailing, and global ecommerce. Prior to Pitney Bowes, Adam worked as an application developer, consultant, and designer for companies like Kroll Ontrack, IBM, and EDS. He supports and publishes articles on Chmurowisko.pl, the most recognized Polish website devoted to Cloud technology and a premier source of information about AWS and Cloud Computing in general.

Adam is also a co-founder of AWS User Group Poland (established in 2014), an active speaker and trainer at Cloud conferences, instructor at Cloud and Software workshops as well as co-organizer of the Cloudyna conference. Be sure to take a look at his LinkedIn profile.

Kai Hendry
Kai Hendry is the founder of Webconverger, a company and open source project of the same name, supplier of Web kiosk and signage software since 2007. After graduating from the University of Helsinki with a Master’s degree in Computer Science in 2005, he travelled and worked around the world, discovering insecure Web kiosks in Internet cafes and public spaces. On his return to England, he engineered a secure Web kiosk operating system based on Debian and maintained it on weekends whilst in full-time employment working on Web technologies.

Over time, Webconverger’s popularity grew, and by the end of his tenure in the telecommunications industry he decided to move to Singapore, get married, and focus on his company. Now a successful small business, Webconverger provides a reliable management service for Web kiosks using AWS services such as S3 with Route 53 failover.

Kai is an active member of the maker community in Singapore (usually found working from Hackerspace.SG), helps with the local AWS User Group Singapore Meetup group, and organizes the Singapore Hack and Tell chapter.

You can find Kai on Twitter and at his home page.

Onur Salk
For 8 years Onur Salk has been leading the infrastructure and technical operations of Yemeksepeti.com, which has since been acquired by Delivery Hero. He is also responsible for Foodonclick.com, Ifood.jo, Yemek.com and Irmik.com.

He helped build Yemek.com, a fully automated and self-healing website that runs entirely on Amazon Web Services. In a first for Turkey, he migrated Foodonclick.com to AWS, implementing MS SQL Always On in a production environment.

Onur regularly publishes AWS articles on his blog Wekanban.com and is the founder and organizer of the AWS User Group Turkey Meetup group in Istanbul. He is passionate about cloud computing, automation, configuration management and DevOps. He also enjoys programming in Python and developing open source AWS tools.

You can find Onur on Twitter, read his blog, and view his LinkedIn profile.

Paolo Latella
Paolo Latella is a Cloud Solutions Architect and AWS Technical Trainer at XPeppers, an enterprise focused on Cloud technologies and DevOps methodologies and a member of the AWS Partner Network (APN). Paolo has more than 15 years of experience in IT and has worked with AWS technologies since 2008. Before joining XPeppers he was a Solution Architect Team Leader at Interact, an enterprise leader in Digital Media for the Cloud, where he worked on the first Hybrid Cloud project for the Italian Public Sector.

He graduated in Computer Science from the University of Rome “La Sapienza”, publishing a thesis on “Auto configuration and monitoring of Wireless Sensors Network”. After graduating, he received a research grant to study advanced network systems and mission-critical services at CASPUR (Consorzio Applicazioni Supercalcolo per Università e Ricerca), now CINECA.

Paolo hosts regular meetings as the Co-Founder of AWS User Group Italia and AWS User Group Ticino. He can also be found participating at various technology conferences in Italy.

You can follow Paolo on Twitter, read his LinkedIn profile, or inspect his GitHub repos.

Raphael Francis
Raphael Francis is a proud Cebuano technopreneur. He is the Chief Technology Officer of Upteam Corporation, a worldwide supplier of authentic, curated, pre-owned high-end brands. He serves as a consultant to the management services company Penbrothers and the business SaaS company Yewusoftware, and was a founding member of AVA.ph, the Philippines’ first curated marketplace for premium brands. He also served as the CTO of Techforge Solutions, an IT firm that launched various brands, enterprises, and online ventures.

“Sir Rafi” has genuine enthusiasm for effective mentoring. He comes from a family of teachers and educators, and was a professor himself at the Sacred Heart – Ateneo de Cebu and La Salle College of St. Benilde.

As co-leader of the AWS User Group Philippines since 2013, he regularly answers questions, gives advice and organizes events for the AWS community. Read Raphael’s LinkedIn profile to learn more.

Rob Linton
Rob Linton is the founder of Podzy, an encrypted on-premises replacement for Dropbox and winner of the 2013 Australian iAwards Toolsets category. Over the past 20 years he has worked as a spatial information systems professional and data specialist. His first company, Logicaltech Systalk, received numerous awards and commendations for product excellence, and was the winner of the 2010 Australian iAwards.

In July 2011 he founded the first AWS User Group in Australia. He is a certified ISO 27001 Security Systems auditor, and one of the few people to receive a perfect score on his SQL Server certification. His most recent book is Amazon Web Services: Migrate your .NET Enterprise Application to the Amazon Cloud.

In his spare time he enjoys coding in C++ on his Macbook Pro and chasing his kids away from things that break relatively easily.

Welcome Aboard
Please join me in welcoming our newest heroes!

Jeff;

Wednesday, July 29, 2015

Coming Soon – AWS Device Farm Support for iOS Apps

We launched AWS Device Farm earlier this month with support for testing apps on Android and Fire OS devices.

I am happy to give you a heads-up that you will soon be able to test your apps on Apple phones and tablets! We plan to launch iOS support on August 4, 2015, with support for multiple test automation frameworks, including the cross-platform frameworks Appium and Calabash.

You can also use the fuzz test that is built in to Device Farm. This test randomly sends user interface events to devices and reports on the results.
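
If you want to script your runs ahead of time, here is a minimal sketch of scheduling the built-in fuzz test through the Device Farm API using the AWS SDK for Python (boto3); all of the ARNs are hypothetical placeholders that you would replace with your own project, device pool, and uploaded app:

    import boto3

    # Device Farm runs in the us-west-2 region
    devicefarm = boto3.client('devicefarm', region_name='us-west-2')

    # Hypothetical ARNs -- substitute values from your own project
    project_arn = 'arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE'
    device_pool_arn = 'arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE'
    app_arn = 'arn:aws:devicefarm:us-west-2:123456789012:upload:EXAMPLE'

    # Schedule the built-in fuzz test, which sends random UI events
    # to each device in the pool and reports on the results
    run = devicefarm.schedule_run(
        projectArn=project_arn,
        appArn=app_arn,
        devicePoolArn=device_pool_arn,
        name='my-fuzz-run',
        test={'type': 'BUILTIN_FUZZ'},
    )
    print(run['run']['arn'])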

Here are some preliminary screen shots of the new iOS support in action. After you upload your binary to Device Farm, you will have the opportunity to select the app to test:

After you start the test (step 5 in the screen shot above), the test results and the associated screen shots will be displayed as they arrive:

With the new iOS support, you will be able to test your cross-platform titles and get reports (including high-level test results, problem patterns, logs, screenshots, and performance data) that are consistent, regardless of the platform and test framework that you use. If you use a cross-platform test framework such as Appium or Calabash, you can use the same code for Android, Fire OS, and iOS tests.

Be Prepared
As I said earlier, iOS support will be available in less than a week. You can get ready now by reading the Device Farm documentation and by creating test suites and scripts using one or more of the frameworks that I mentioned above.

Jeff;

New – AWS Mobile SDK for Xamarin

We want to make sure that developers can use the programming language of their choice to build AWS-powered applications that run on many different types of devices and in a wide variety of environments. As you can see from the AWS Tools page, we already support multiple mobile (iOS, Android, and Unity) and counter-top (Alexa / Echo) devices with SDKs for JavaScript, Java, .NET, Node.js, PHP, Python, Ruby, and Go, along with IDE toolkits for Eclipse and Visual Studio.

Today we are adding a developer preview of support for Xamarin to the existing AWS SDK for .NET. Xamarin allows you to build cross-platform C# applications that run on iOS, Android, and Windows devices. The new AWS Mobile SDK for Xamarin gives your Xamarin app access to multiple AWS services.

You can use the Xamarin Studio IDE to write, debug, and test your code:

You can also use Visual Studio with the Xamarin plugin.

Read Getting Started with the AWS Mobile SDK for .Net / Xamarin to learn how to install the SDK and to start using AWS services from a Xamarin application. Read more on the Xamarin blog.

Jeff;

Amazon S3 Update – Notification Enhancements & Bucket Metrics

We launched Amazon Simple Storage Service (S3) in the spring of 2006 with a simple blog post. Over the years we have kept the model simple and powerful while reducing prices and adding features such as the reduced redundancy storage model, VPC endpoints, cross-region replication, and event notifications.

We launched the event notification model last year, with support for notification when objects are created via PUT, POST, Copy, or Multipart Upload. At that time the notifications applied to all of the objects in the bucket, with the promise of more control over time.

Today we are adding notification when objects are deleted, with filtering on prefixes and suffixes. We are also adding support for bucket-level Amazon CloudWatch metrics.

Notification Enhancements
You can now arrange to be notified when an object has been deleted from an S3 bucket. Like the other types of notifications, delete notifications can be delivered to an SQS queue or an SNS topic or used to invoke an AWS Lambda function. The notification indicates that a DELETE operation has been performed on an object, and can be used to update any indexing or tracking data that you maintain for your S3 objects.

Also, you can now use prefix and suffix filters to opt in to event notifications based on object name. For example, you can choose to receive DELETE notifications for the images/ prefix and the .png suffix in a particular bucket like this:
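
Expressed in code, a minimal sketch of that configuration might look like this using the AWS SDK for Python (boto3); the bucket name and SNS topic ARN are placeholders:

    import boto3

    s3 = boto3.client('s3')

    # Notify an SNS topic whenever a .png object under the images/
    # prefix is deleted from the bucket (names are placeholders)
    s3.put_bucket_notification_configuration(
        Bucket='my-example-bucket',
        NotificationConfiguration={
            'TopicConfigurations': [{
                'TopicArn': 'arn:aws:sns:us-east-1:123456789012:my-topic',
                'Events': ['s3:ObjectRemoved:*'],
                'Filter': {'Key': {'FilterRules': [
                    {'Name': 'prefix', 'Value': 'images/'},
                    {'Name': 'suffix', 'Value': '.png'},
                ]}},
            }]
        },
    )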

You can create and edit multiple notifications from within the Console:

CloudWatch Storage Metrics
Amazon CloudWatch tracks metrics for AWS services and for your applications and allows you to set alarms that will be triggered when a metric goes past a limit that you specify. You can now monitor and set alarms on your S3 storage usage. Available metrics include total bytes (Standard and Reduced Redundancy Storage) and total number of objects, all on a per-bucket basis. You can find the metrics in the AWS Management Console:

The metrics are updated daily, and will align with those in your AWS bill. These metrics do not include or apply to S3 objects that have been migrated (via a lifecycle rule) to Glacier.
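
The same metrics can be retrieved programmatically. Here is a rough sketch using boto3, assuming a bucket named my-example-bucket and the StandardStorage storage type dimension:

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client('cloudwatch')

    # Fetch the daily BucketSizeBytes datapoints for the past week
    stats = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        MetricName='BucketSizeBytes',
        Dimensions=[
            {'Name': 'BucketName', 'Value': 'my-example-bucket'},
            {'Name': 'StorageType', 'Value': 'StandardStorage'},
        ],
        StartTime=datetime.utcnow() - timedelta(days=7),
        EndTime=datetime.utcnow(),
        Period=86400,  # one datapoint per day
        Statistics=['Average'],
    )
    for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
        print(point['Timestamp'], point['Average'])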

Available Now
These features are available now and you can start using them today.

Jeff;

Now Available – Amazon Aurora

We announced Amazon Aurora last year at AWS re:Invent (see Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS for more info). With storage replicated both within and across three Availability Zones, along with an update model driven by quorum writes, Amazon Aurora is designed to deliver high performance and 99.99% availability while easily and efficiently scaling to up to 64 TB of storage.

In the nine months since that announcement, a host of AWS customers have been putting Amazon Aurora through its paces. As they tested a wide variety of table configurations, access patterns, and queries on Amazon Aurora, they provided us with the feedback that we needed in order to fine-tune the service. Along the way, they verified that each Amazon Aurora instance is able to deliver on our performance target of up to 100,000 writes and 500,000 reads per second, along with a price-performance ratio that is 5 times better than previously available.

Now Available
Today I am happy to announce that Amazon Aurora is now available for use by all AWS customers, in three AWS regions. During the testing period we added some important features that will simplify your migration to Amazon Aurora. Since my original blog post provided a good introduction to many of the features and benefits of the core product, I’ll focus on the new features today.

Zero-Downtime Migration
If you are already using Amazon RDS for MySQL and want to migrate to Amazon Aurora, you can do a zero-downtime migration by taking advantage of Amazon Aurora’s new features. I will summarize the process here, but I do advise you to read the reference material below and to do a practice run first! Immediately after you migrate, you will begin to benefit from Amazon Aurora’s high throughput, security, and low cost. You will be in a position to spend less time thinking about the ins and outs of database scaling and administration, and more time to work on your application code.

If the database is active, start by enabling binary logging in the instance’s DB parameter group (see MySQL Database Log Files to learn how to do this). In certain cases, you may want to consider creating an RDS Read Replica and using it as the data source for the migration and replication (check out Replication with Amazon Aurora to learn more).

Open up the RDS Console, select your existing database instance, and choose Migrate Database from the Instance Actions menu:

Fill in the form (in most cases you need do nothing more than choose the DB Instance Class) and click on the Migrate button:

Aurora will create a new DB instance and proceed with the migration:

A little while later (a coffee break might be appropriate, depending on the size of your database), the Amazon Aurora instance will be available:

Now, assuming that the source database was actively changing while you were creating the Amazon Aurora instance, replicate those changes to the new instance using the mysql.rds_set_external_master command, and then update your application to use the new Aurora endpoint!
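
As a sketch of the replication catch-up step (not a complete recipe), the mysql.rds_set_external_master procedure can be invoked from Python using the third-party PyMySQL driver; every endpoint, credential, and binlog coordinate below is a placeholder (SHOW MASTER STATUS on the source provides the binlog file and position):

    import pymysql

    # Connect to the new Aurora instance (hypothetical endpoint/credentials)
    conn = pymysql.connect(
        host='my-aurora.cluster-xyz.us-east-1.rds.amazonaws.com',
        user='admin',
        password='my-password',
    )

    with conn.cursor() as cursor:
        # Point Aurora at the source RDS MySQL instance so that changes
        # made during the migration can be replicated across
        cursor.callproc('mysql.rds_set_external_master', (
            'source-db.xyz.us-east-1.rds.amazonaws.com',  # source host
            3306,                          # source port
            'repl_user',                   # replication user
            'repl_password',               # replication password
            'mysql-bin-changelog.000031',  # binlog file (placeholder)
            107,                           # binlog position (placeholder)
            0,                             # 0 = do not use SSL
        ))
        # Begin replicating from the source
        cursor.execute('CALL mysql.rds_start_replication;')

    conn.close()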

Metrics Galore
Each Amazon Aurora instance reports a plethora of metrics to Amazon CloudWatch. You can view these from the Console and you can, as usual, set alarms and take actions as needed:

Easy and Fast Replication
Each Amazon Aurora instance can have up to 15 replicas, each of which adds additional read capacity. You can create a replica with a couple of clicks:

Due to Amazon Aurora’s unique storage architecture, replication lag is extremely low, typically between 10 ms and 20 ms.
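
If you prefer to script it, here is a minimal sketch using boto3; in Aurora a replica is an additional instance attached to an existing cluster, and all identifiers below are hypothetical:

    import boto3

    rds = boto3.client('rds')

    # Add a read replica to an existing Aurora cluster. Replicas share
    # the cluster's storage volume, so they add read capacity without
    # requiring a separate copy of the data.
    rds.create_db_instance(
        DBInstanceIdentifier='my-aurora-replica-1',
        DBInstanceClass='db.r3.large',
        Engine='aurora',
        DBClusterIdentifier='my-aurora-cluster',
    )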

5x Performance
When we first announced Amazon Aurora we expected to deliver a service that offered at least 4 times the price-performance of existing solutions. Now that we are ready to ship, I am happy to report that we’ve exceeded this goal, and that Amazon Aurora can deliver 5x the price-performance of a traditional relational database when run on the same class of hardware.

In general, this does not mean that individual queries will run 5x as fast as before (although Amazon Aurora’s fast, SSD-based storage certainly speeds things up). Instead, it means that Amazon Aurora is able to handle far more concurrent queries (both read and write) than other products. Amazon Aurora’s unique, highly parallelized access to storage reduces contention for stored data and allows it to process queries in a highly efficient fashion.

From our Partners
Members of the AWS Partner Network (APN) have been working to test their offerings and to gain operational and architectural experience with Amazon Aurora. Here’s what I know about so far:

  • Business Intelligence – Tableau, Zoomdata, and Looker.
  • Data Integration – Talend, Attunity, and Informatica.
  • Query and Monitoring – Webyog, Toad, and Navicat.
  • SI and Consulting – 8K Miles, 2nd Watch, and Nordcloud.
  • Content Management – Alfresco.

Ready to Roll
Our customers and partners have put Amazon Aurora to the test and it is now ready for your production workloads. We are launching in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions, and will expand to others over time.

Pricing works like this:

  • Database Instances – You pay by the hour for the primary instance and any replicas. Instances are available in 5 sizes, with 2 to 32 vCPUs and 15.25 to 244 GiB of memory. You can also use Reserved Instances to save money on your steady-state database workloads.
  • Storage – You pay $0.10 per GB per month for storage, based on the actual number of bytes of storage consumed by your database, sampled hourly. For this price you get a total of six copies of your data, two copies in each of three Availability Zones.
  • I/O – You pay $0.20 for every million I/O requests that your database makes.
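
As a quick worked example (leaving instance hours aside), a database that stores 1 TB (1,024 GB) and makes 100 million I/O requests in a month would incur roughly 1,024 × $0.10 = $102.40 for storage plus 100 × $0.20 = $20.00 for I/O, or about $122.40 on top of the hourly instance charges.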

See the Amazon Aurora Pricing page for more information.

Go For It
To learn more, visit the Amazon Aurora page and read the Amazon Aurora Documentation. You can also attend the upcoming Amazon Aurora Webinar to learn more and to see Aurora in action.

Jeff;

Sunday, July 26, 2015

Amazon Home Services

I had heard about Amazon Home Services back in March, which prompted me to write a post about it, but I didn’t think much about it at the time, figuring it was a passing fad. One of Jeff Bezos’ infamous “fail fast” schemes. Well… maybe I was wrong. It has not been a …

Saturday, July 25, 2015

Elastic MapReduce Release 4.0.0 With Updated Applications Now Available

Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Since EMR was first launched in 2009 (Announcing Amazon Elastic MapReduce), we have added comprehensive console support and many, many features.

Today we are announcing Amazon EMR release 4.0.0, which brings many changes to the platform. This release includes updated versions of Hadoop ecosystem applications and Spark, available to install on your cluster, and improves the application configuration experience. As part of this release we also adjusted some ports and paths to bring them into better alignment with several Hadoop and Spark standards and conventions. Unlike most other AWS services, which are frequently updated behind the scenes rather than in discrete releases, EMR has versioned releases so that you can write programs and scripts that make use of features found only in a particular EMR release, or of an application version found in a particular EMR release.

If you are currently using AMI version 2.x or 3.x, read the EMR Release Guide to learn how to migrate to 4.0.0.

Application Updates
EMR users have access to a number of applications from the Hadoop ecosystem. This version of EMR features the following updates:

  • Hadoop 2.6.0 – This version of Hadoop includes a variety of general functionality and usability improvements.
  • Hive 1.0 – This version of Hive includes performance enhancements, additional SQL support, and some new security features.
  • Pig 0.14 – This version of Pig features a new ORCStorage class, predicate pushdown for better performance, bug fixes, and more.
  • Spark 1.4.1 – This release of Spark includes a binding for SparkR and the new Dataframe API, plus many smaller features and bug fixes.

Quick Cluster Creation in Console
You can now create an EMR cluster from the Console using the Quick cluster configuration experience:

Improved Application Configuration Editing
In Amazon EMR AMI versions 2.x and 3.x, bootstrap actions were primarily used to configure applications on your cluster. With Amazon EMR release 4.0.0, we have improved the configuration experience by providing a direct method to edit the default configurations for applications when creating your cluster. We have added the ability to pass a configuration object which contains a list of the configuration files to edit and the settings in those files to be changed. You can create a configuration object and reference it from the CLI, the EMR API, or from the Console. You can store the configuration information locally or in Amazon Simple Storage Service (S3) and supply a reference to it (if you are using the Console, click on Go to advanced options when you create your cluster in order to specify configuration values or to use a configuration file):
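
To make this concrete, here is a rough sketch of passing a configuration object inline while creating a cluster with the AWS SDK for Python (boto3); the cluster name, instance types, roles, and the Hive setting being edited are all illustrative:

    import boto3

    emr = boto3.client('emr')

    # Create an EMR 4.0.0 cluster and edit a default Hive setting
    # directly, with no bootstrap action (all values illustrative)
    emr.run_job_flow(
        Name='my-emr-400-cluster',
        ReleaseLabel='emr-4.0.0',
        Applications=[{'Name': 'Hive'}, {'Name': 'Spark'}],
        Configurations=[{
            # Edit hive-site.xml on every node of the cluster
            'Classification': 'hive-site',
            'Properties': {'hive.join.emit.interval': '1000'},
        }],
        Instances={
            'MasterInstanceType': 'm3.xlarge',
            'SlaveInstanceType': 'm3.xlarge',
            'InstanceCount': 3,
        },
        JobFlowRole='EMR_EC2_DefaultRole',
        ServiceRole='EMR_DefaultRole',
    )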

To learn more, read about Configuring Applications.

New Packaging System / Standard Ports & Paths
Our release packaging system is now based on Apache Bigtop. This will allow us to add new applications and new versions of existing applications to EMR even more quickly.

Also, we have moved most ports and paths on EMR release 4.0.0 to open source standards. For more information about these changes read Differences Introduced in 4.x.

Additional EMR Configuration Options for Spark
The EMR team asked me to share a couple of tech tips with you:

Spark on YARN has the ability to dynamically scale the number of executors used for a Spark application. You still need to set the memory (spark.executor.memory) and cores (spark.executor.cores) used for an executor in spark-defaults, but YARN will automatically allocate the number of executors to the Spark application as needed. To enable dynamic allocation of executors, set spark.dynamicAllocation.enabled to true in the spark-defaults configuration file. Additionally, the Spark shuffle service is enabled by default in Amazon EMR, so you do not need to enable it yourself.
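
For example, a spark-defaults entry along these lines in your configuration object would turn dynamic allocation on (the executor memory and core values are illustrative):

    # Configuration object fragment: enable dynamic executor allocation
    # (memory and core values are illustrative)
    spark_defaults = {
        'Classification': 'spark-defaults',
        'Properties': {
            'spark.dynamicAllocation.enabled': 'true',
            'spark.executor.memory': '4g',
            'spark.executor.cores': '2',
        },
    }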

You can configure your executors to utilize the maximum resources possible on each node in your cluster by setting the maximizeResourceAllocation option to true when creating your cluster. You can set this option by adding it to the “spark” classification in your configuration object. This option calculates the maximum compute and memory resources available for an executor on a node in the core node group and sets the corresponding spark-defaults settings accordingly. It also sets the number of executors by setting spark.executor.instances to the number of core nodes specified when creating your cluster. Note, however, that you cannot use this setting and also enable dynamic allocation of executors.
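
Here is the corresponding sketch for the “spark” classification; as noted, it cannot be combined with dynamic allocation:

    # Configuration object fragment: size executors to consume the
    # maximum resources on each core node (cannot be combined with
    # spark.dynamicAllocation.enabled shown above)
    spark_classification = {
        'Classification': 'spark',
        'Properties': {'maximizeResourceAllocation': 'true'},
    }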

To learn more about these options, read Configure Spark.

Available Now
All of the features listed above are available now and you can start using them today.

If you are new to large-scale data processing and EMR, take a look at our Getting Started with Amazon EMR page. You’ll find a new tutorial video, along with information about training and professional services, all aimed at getting you up and running quickly and efficiently.

Jeff;

New Amazon CloudWatch Action – Reboot EC2 Instance

Amazon CloudWatch monitors your cloud resources and applications, including Amazon Elastic Compute Cloud (EC2) instances. You can track cloud, system, and application metrics, see them in graphical form, and arrange to be notified (via a CloudWatch alarm) if they cross a threshold value that you specify. You can also stop, terminate, or recover an EC2 instance when an alarm is triggered (see my blog post, Amazon CloudWatch – Alarm Actions for more information on alarm actions).

New Action – Reboot Instance
Today we are giving you a fourth action. You can now arrange to reboot an EC2 instance when a CloudWatch alarm is triggered. Because you can track and alarm on cloud, system, and application metrics, this new action gives you a lot of flexibility.

You could reboot an instance if an instance status check fails repeatedly. Perhaps the instance has run out of memory due to a runaway application or service that is leaking memory. Rebooting the instance is a quick and easy way to remedy this situation; you can easily set this up using the new alarm action. In contrast to the existing recovery action, which is specific to a handful of EBS-backed instance types and applies only when the instance state is considered impaired, this action is available on all instance types and is effective regardless of the instance state.

If you are using the CloudWatch API or the AWS Command Line Interface (CLI) to track application metrics, you can reboot an instance if the application repeatedly fails to respond as expected. Perhaps a process has gotten stuck or an application server has lost its way. In many cases, hitting the (virtual) reset switch is a clean and simple way to get things back on track.

Creating an Alarm
Let’s walk through the process of creating an alarm that will reboot one of my instances if the CPU Utilization remains above 90% for an extended period of time. I simply locate the instance in the AWS Management Console, focus my attention on the Alarm Status column, and click on the icon:

Then I click on Take the action, choose Reboot this instance, and set the parameters (90% or more CPU Utilization for 15 minutes in this example):

If necessary, the console will ask me to confirm the creation of an IAM role as part of this step (this is a new feature):

The role will have permission to call the “Describe” functions in the CloudWatch and EC2 APIs. It will also have permission to reboot, stop, and terminate instances.

I click on Create Alarm and I am all set!
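
If you prefer to do the same thing programmatically, a sketch along these lines should work with boto3; the instance ID is a placeholder, and the reboot action is expressed as the arn:aws:automate:<region>:ec2:reboot action ARN:

    import boto3

    cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

    # Reboot the instance if average CPU utilization stays at or above
    # 90% for three consecutive 5-minute periods (15 minutes in total)
    cloudwatch.put_metric_alarm(
        AlarmName='reboot-on-high-cpu',
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'InstanceId', 'Value': 'i-1234abcd'}],  # placeholder
        Statistic='Average',
        Period=300,
        EvaluationPeriods=3,
        Threshold=90.0,
        ComparisonOperator='GreaterThanOrEqualToThreshold',
        AlarmActions=['arn:aws:automate:us-east-1:ec2:reboot'],
    )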

This feature is available now and you can start using it today in all public AWS regions.

Jeff;