Tuesday, September 29, 2015

New AWS Digital Library for Big Data Solutions

My colleague Luis Daniel Soto has been working with AWS Community Hero Lynn Langit to create a comprehensive collection of resources for customers who are ready to run Big Data applications on AWS!

Here’s what they have to say…

— Jeff;


Today the AWS Marketplace is launching a new online video library designed to help our customers find AWS Marketplace vendor solutions, as well as accelerate and manage short- and long-term data integration, business intelligence, and advanced analytics projects for their AWS cloud and on-premises data.

The AWS Marketplace Digital Library for Big Data provides business and technical content from AWS Marketplace technology vendors and case studies from customers who have built end-to-end Big Data solutions. The segments are hosted by cloud and Big Data architect Lynn Langit and organized around a common set of functionality to help organizations and individuals find the AWS Marketplace vendor solutions to address their particular needs.

The library is hosted on a video webcasting platform that allows our customers to interact with AWS Marketplace partners by asking questions as they watch the demos and interviews in split-screen mode. Here’s a sample:

If you are an APN Partner and want to learn more or want to be part of the AWS Digital Library, visit the new Big Data Partner Solutions page.

— Luis and Lynn

The Startup Experience at AWS re:Invent

AWS re:Invent is just over one week away—as I prepare to head to Vegas, I’m pumped up about the chance to interact with AWS-powered startups from around the world. One of my favorite parts of the week is being able to host three startup-focused sessions Thursday afternoon:

The Startup Scene in 2016: a Visionary Panel [Thursday, 2:45PM]
In this session, I’ll moderate a diverse panel of technology experts who’ll discuss emerging trends all startups should be aware of, including how local governments, microeconomic trends, evolving accelerator programs, and the AWS cloud are influencing the global startup scene. This panel will include:

  • Tracy DiNunzio, Founder & CEO, Tradesy
  • Michael DeAngelo, Deputy CIO, State of Washington
  • Ben Whaley, Founder & Principal Consultant, WhaleTech LLC
  • Jason Seats, Managing Director (Austin) & Partner, Techstars

CTO-to-CTO Fireside Chat [Thursday, 4:15 PM]
This is one of my favorite sessions, as I get a chance to sit down and get inside the minds of the technical leaders behind some of the most innovative and disruptive startups in the world. I’ll have 1:1 chats with the following CTOs:

  • Laks Srini, CTO and Co-founder, Zenefits
  • Mackenzie Kosut, Head of Technical Operations, Oscar Health
  • Jason MacInnes, CTO, DraftKings
  • Gautam Golwala, CTO and Co-founder, Poshmark

4th Annual Startup Launches [Thursday, 5:30 PM]
To wrap up our startup track, we’ll invite five AWS-powered startups to launch their companies on stage at the 4th Annual Startup Launches event, immediately followed by a happy hour. I can’t share the lineup, as some of these startups are in stealth mode, but I can promise you this will be an exciting event, with each startup sharing a special offer that is exclusive to those of you in attendance.

Other startup activities

Startup Insights from a Venture Capitalist’s Perspective [Thursday, 1:30 PM]
Immediately before I take the stage, you can join a group of venture capitalists as they share insights and observations about the global startup ecosystem: each panelist will share the most significant insight they’ve gained in the past 12 months and what they believe will be the most impactful development in the coming year.

The AWS Startup Pavilion [Tuesday – Thursday]
If you’re not able to join the startup sessions Thursday afternoon, I encourage you to swing by the AWS Startup Pavilion (within re:Invent Central, booth 1062) where you can meet the AWS startup team, mingle with other startups, chat 1:1 with an AWS architect, and learn about AWS Activate.

Startup Stop on the re:Invent Pub Crawl [Wednesday evening]
And to relax and unwind in the evening, you won’t want to miss the startup stop on the re:Invent pub crawl, at the Rockhouse within The Grand Canal Shoppes at The Venetian. This is the place to be for free food, drinks, and networking during the Wednesday night re:Invent pub crawl.

Look forward to seeing you in Vegas!

New – Receive and Process Incoming Email with Amazon SES

We launched the Amazon Simple Email Service (SES) way back in 2011, with a focus on deliverability — getting mail through to the intended recipients. Today, the service is used by Amazon and our customers to send billions of transactional and marketing emails each year.

Today we are launching a much-requested new feature for SES. You can now use SES to receive email messages for entire domains or for individual addresses within a domain. This will allow you to build scalable, highly automated systems that can programmatically send, receive, and process messages with minimal human intervention.

You use sophisticated rule sets and IP address filters to control the destiny of each message. Messages that match a rule can be augmented with additional headers, stored in an S3 bucket, routed to an SNS topic, passed to a Lambda function, or bounced.

Receiving and Processing Email
In order to make use of this feature, you will need to verify that you own the domain of interest. If you have already done this in order to use SES to send email, then you are good to go.

Now you need to route your incoming email to SES for processing. You have two options here. You can set the domain’s MX (Mail Exchange) record to point to the SES SMTP endpoint in the region where you want to process incoming email. Or, you can configure your existing mail handling system to forward mail to the endpoint.
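If you choose the MX route, the record itself is a one-liner. For example, to direct mail for a domain to SES in the US East (Northern Virginia) region, the zone file entry would look something like this (example.com is a placeholder; inbound endpoints follow the inbound-smtp.&lt;region&gt;.amazonaws.com pattern):

```
example.com.    IN  MX  10  inbound-smtp.us-east-1.amazonaws.com.
```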

The next step is to figure out what you want to do with the messages. To do this, you need to create some receipt rules. Rules are grouped into rule sets (order matters within the set) and can apply to multiple domains. Like most aspects of AWS, rules and rule sets are specific to a particular region. You can have one active rule set per AWS region; if you have no such set, then all incoming email will be rejected.

Rules have the following attributes:

  • Enabled – A flag that enables or disables the rule.
  • Recipients – A list of email addresses and/or domains that the rule applies to. If this attribute is not supplied, the rule matches all addresses in the domain.
  • Scan – A flag to request spam and virus scans (default is true).
  • TLS – A flag to require that mail matching this rule is delivered over a connection that is encrypted with TLS.
  • Action List – An ordered list of actions to perform on messages that match the rule.
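Put together, a receipt rule is a small JSON document. Here’s a sketch of what a simple rule might look like when supplied to the SES API or CLI (the rule name, recipient address, and bucket name are placeholders):

```json
{
  "Name": "process-support-mail",
  "Enabled": true,
  "TlsPolicy": "Optional",
  "ScanEnabled": true,
  "Recipients": ["support@example.com"],
  "Actions": [
    { "S3Action": { "BucketName": "my-inbound-mail-bucket" } }
  ]
}
```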

When SES receives a message, it performs several checks before it accepts the message for further processing. Here’s what happens:

  • The source IP address is checked against an internal block list maintained by SES, and the message is rejected if the address is on the list (this list can be overridden using an IP address filter that explicitly allows the IP address).
  • The source IP address is then checked against your own IP address filters, and the message is rejected if so directed by a filter.
  • The message is checked to see if it matches any of the recipients specified in a rule, or if there is a domain-level match, and is accepted if so.

Messages that do not match a rule do not cost you anything. After a message has been accepted, SES will perform the actions associated with the matching rule. The following actions are available:

  • Add a header to the message.
  • Store the message in a designated S3 bucket, with optional encryption using a key stored in AWS Key Management Service (KMS). The entire message (headers and body) must be no larger than 30 megabytes in size for this action to be effective.
  • Publish the message to a designated SNS topic. The entire message (headers and body) must be no larger than 150 kilobytes in size for this action to be effective.
  • Invoke a Lambda function. The invocation can be synchronous or asynchronous (the default).
  • Return a specified bounce message to the sender.
  • Stop processing the actions in the rule.

The actions are run in the order specified by the rule. Lambda actions have access to the results of the spam and virus scans and can take action accordingly. If the Lambda function needs access to the body of the message, a preceding action in the rule must store the message in S3.

A Quick Demo
Here’s how I would create a rule that passes incoming email messages to a Lambda function (MyFunction), notifies an SNS topic (MyTopic), and then stores the messages in an S3 bucket (MyBucket) after encrypting them with a KMS key (aws/ses):

I can see all of my rules at a glance:

Here’s a Lambda function that will stop further processing if a message fails any of the spam or virus checks. In order for this function to perform as expected, it must be invoked in synchronous (RequestResponse) fashion.

exports.handler = function(event, context) {
    console.log('Spam filter');

    var sesNotification = event.Records[0].ses;
    console.log("SES Notification:\n", JSON.stringify(sesNotification, null, 2));

    // Check whether any of the spam and virus verdicts failed
    if (sesNotification.receipt.spfVerdict.status === 'FAIL'
        || sesNotification.receipt.dkimVerdict.status === 'FAIL'
        || sesNotification.receipt.spamVerdict.status === 'FAIL'
        || sesNotification.receipt.virusVerdict.status === 'FAIL')
    {
        console.log('Dropping spam');
        // Stop processing the rule set, dropping the message
        context.succeed({'disposition': 'STOP_RULE_SET'});
    }
    else
    {
        // No disposition returned; SES continues with the remaining actions
        context.succeed();
    }
};

To learn more about this feature, read Receiving Email in the Amazon SES Developer Guide.

Pricing and Availability
You will pay $0.10 for every 1000 emails that you receive. Messages that are 256 KB or larger are charged for the number of complete 256 KB chunks in the message, at the rate of $0.09 per 1000 chunks. A 768 KB message counts for 3 chunks. You’ll also pay for any S3, SNS, or Lambda resources that you consume. Refer to the Amazon SES Pricing page for more information.
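As a quick sanity check on the arithmetic, here’s a small JavaScript helper (my own back-of-the-envelope sketch, not part of SES) that estimates the receiving cost for a batch of messages using the rates quoted above:

```javascript
// Estimate the SES receiving cost for a list of message sizes (in KB):
// $0.10 per 1,000 messages received, plus $0.09 per 1,000 complete
// 256 KB chunks for messages that are 256 KB or larger.
function sesReceivingCost(messageSizesKB) {
    var PER_MESSAGE = 0.10 / 1000;
    var PER_CHUNK   = 0.09 / 1000;
    var CHUNK_KB    = 256;

    var chunks = 0;
    messageSizesKB.forEach(function (sizeKB) {
        // Only complete 256 KB chunks are counted
        chunks += Math.floor(sizeKB / CHUNK_KB);
    });

    return messageSizesKB.length * PER_MESSAGE + chunks * PER_CHUNK;
}
```

A single 768 KB message works out to one message charge plus three chunk charges, matching the example above.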

This new feature is available now and you can start using it today. Amazon SES is available in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions.

— Jeff;

Saturday, September 26, 2015

Amazon Glacier Update – Third-Party SEC 17a-4(f) Assessment for Vault Lock

Amazon Glacier is designed to store any amount of archival or backup data with high durability. Amazon Glacier is a very cost-effective solution (as low as $0.007 per gigabyte per month) for data that is infrequently accessed, and where a retrieval time of several hours is acceptable.

Earlier this year we introduced a new Amazon Glacier compliance feature called Vault Lock (see my post, Create Write-Once-Read-Many Archive Storage with Amazon Glacier, to learn more). As I wrote at the time, this feature allows you to lock your Amazon Glacier vaults with compliance controls that are designed (per SEC Rule 17a-4(f)) to help meet the requirement that “electronic records must be preserved exclusively in a non-rewritable and non-erasable format.”

That announcement brought Amazon Glacier to the attention of AWS customers in the financial services industry. Large banks, broker-dealers, and securities clearinghouses have all expressed interest in this important new feature.

New Third-Party Assessment Report
Today I am pleased to be able to announce that we have received a third-party assessment report that speaks to Amazon Glacier’s ability to help meet the requirements of SEC 17a-4(f).

This assessment is provided by Cohasset Associates, a highly respected consulting firm with more than 40 years of experience and knowledge related to the legal, technical, and operational issues associated with the records management practices of companies regulated by the US SEC (Securities and Exchange Commission) and the US CFTC (Commodity Futures Trading Commission).

The full assessment (which is actually fairly interesting) provides a detailed look at the logic that Amazon Glacier uses to create immutable policies, along with a step-by-step examination and exposition of the controls that are used to protect Amazon Glacier vaults for compliance use cases once they have been locked (again, more information on this procedure can be found in the blog post that I referenced above).

View the Amazon Glacier with Vault Lock Assessment to learn more. For information about other compliance features, visit the AWS Compliance Center.

— Jeff;

Wednesday, September 23, 2015

Now Available – Amazon Linux AMI 2015.09

My colleague Max Spevack runs the team that produces the Amazon Linux AMI. He wrote the guest post below to announce the newest release!

— Jeff;


The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We offer new major versions of the Amazon Linux AMI after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and we welcome feedback on them.

Launching 2015.09 Today
Today we announce the 2015.09 Amazon Linux AMI, which is supported in all regions and on all current-generation EC2 instance types. The Amazon Linux AMI supports both PV and HVM mode, as well as both EBS-backed and Instance Store-backed AMIs.

You can launch this new version of the AMI in the usual ways. You can also upgrade an existing EC2 instance by running the following commands:

$ sudo yum clean all
$ sudo yum update

And then rebooting the instance.

New Kernel
A major new feature in this release is the 4.1.7 kernel, which is the most recent long-term stable release kernel. Of particular interest to many customers is the support for OverlayFS in the 4.x kernel series.

New Features
The roadmap for the Amazon Linux AMI is driven in large part by customer requests. During this release cycle, we have added a number of features as a result of these requests; here’s a sampling:

  • Based on numerous customer requests and in order to support joining Amazon Linux AMI instances to an AWS Directory Service directory, we have added Samba 4.1 to the Amazon Linux AMI repositories, available via sudo yum install samba.
  • Numerous customers have asked for PostgreSQL 9.4 and it is now available in our Amazon Linux AMI repositories as a separate package from PostgreSQL 9.2 and 9.3. PostgreSQL 9.4 is available via sudo yum install postgresql94 and the 2015.09 Amazon Linux AMI repositories include PostgreSQL 9.4.4.
  • A frequent customer request has been MySQL 5.6, and we are pleased to offer it in the 2015.09 repositories as a separate package from MySQL 5.1 and 5.5. MySQL 5.6 is available via sudo yum install mysql56 and the 2015.09 Amazon Linux AMI repositories include MySQL 5.6.26.
  • We introduced support for Docker and Go in our 2014.03 AMI, and we continue to follow upstream developments in each. The lead-up to the 2015.09 release included an update to Go 1.4 and to Docker 1.7.1.
  • We already provide Python 2.6, 2.7 (default), and 3.4 in the Amazon Linux AMI, but several customers have also asked for the PyPy implementation of Python. We’re pleased to include PyPy 2.4 in our preview repository. PyPy 2.4 is compatible with Python 2.7.8 and is installable via sudo yum --enablerepo=amzn-preview install pypy.
  • In our 2015.03 release we added an initial preview of the Rust programming language. Upstream development has continued on this language, and we have updated from Rust 1.0 to Rust 1.2 for the 2015.09 release. You can install the Rust compiler by running sudo yum --enablerepo=amzn-preview install rust.

The release notes contain a longer discussion of the new features and updated packages, including an updated version of Emacs prepared specially for Jeff in order to ensure timely publication of this blog post!

— Max Spevack, Development Manager, Amazon Linux AMI.

PS – If you enjoy the Amazon Linux AMI offering and would like to work on future versions, let us know!

Monday, September 21, 2015

Announcing the AWS Pop-up Loft in Berlin

The AWS Pop-up Lofts in San Francisco and New York have become hubs and working spaces for developers, entrepreneurs, students, and others who are interested in working with and learning more about AWS. They come to learn, code, meet, collaborate, ask questions, and to hang out with other like-minded folks. I expect the newly opened London Loft to serve as the same type of resource for the UK.

I’m happy to be able to announce that we will be popping up a fourth loft, this one in Berlin. Once again, we have created a unique space and assembled a full calendar of events, with the continued help from our friends at Intel. We look forward to using the Loft to meet and to connect with our customers, and expect that it will be a place that they visit on a regular basis.

Startups and established businesses have been making great use of our new Europe (Frankfurt) region; in fact, it is currently growing even faster than all of our other international regions! While this growth has been driven by many factors, we do know that startups in Berlin have been early adopters of the AWS cloud, with some going all the way back to 2006. Since then, some of the best-known startups in Germany and across Europe have adopted AWS, including SoundCloud, Foodpanda, and Zalando.

With a high concentration of talented, ambitious entrepreneurs, Berlin is a great location for the newest Pop-up Loft. Startups and other AWS customers in the area have asked for access to more local technical resources and expertise in order to help them to continue to grow and to succeed with AWS.

Near Berlin Stadtmitte Station
This loft is located on the 5th floor of Krausenstrasse 38 in Berlin, close to Stadtmitte Station and convenient to Spittelmarkt. The opening party will take place on October 14th and the Loft will open for business on the morning of October 15th. After that it will be open from 10 AM to 6 PM, Monday through Friday, with special events in the evening.

During the day, you will have access to the Ask an Architect Bar, daily education sessions, Wi-Fi, a co-working space, and snacks, all at no charge. There will also be resources to help you to create, run, and grow your startup, including educational sessions from local AWS partners, accelerators, and incubators such as Axel Springer’s Plug & Play and Deutsche Telekom’s Hub:Raum.

Ask an Architect
Step up to the Ask an Architect Bar with your code, architecture diagrams, and your AWS questions at the ready! Simply walk in. You will have access to deep technical expertise and will be able to get guidance on AWS architecture, usage of specific AWS services and features, cost optimization, and more.

Echo Hackathon
My colleague David Isbitski will be running an Alexa Hackathon at the Loft. After providing an introduction to the Amazon Echo, David will show you how to build your first Alexa Skill using either AWS Lambda or AWS Elastic Beanstalk (your choice). He will show you how to monitor it using Amazon CloudWatch and will walk you through the process of certifying the Skill as a prerequisite to making it available to customers later this year. The event will conclude with an open hackathon.

AWS Education Sessions
During the day, AWS Solution Architects, Product Managers, and Evangelists will be leading 60-minute educational sessions designed to help you to learn more about specific AWS services and use cases. You can attend these sessions to learn about Mobile & Gaming, Databases, Big Data, Compute & Networking, Architecture, Operations, Security, Machine Learning, and more, all at no charge. Hot startups such as EyeEm, Zalando, and Stormforger will talk about how they use AWS.

Startup Education Sessions
Representatives of the AWS startup community, Berlin-based incubators and accelerators, startup-scene influencers, and hot AWS startup customers will share best practices, entrepreneurial know-how, and lessons learned. Pop in to learn the art of pitching, customer validation and profiling, and PR for startups and corporations. Get to know Axel Springer’s Plug & Play accelerator and Deutsche Telekom’s Hub:Raum incubator.

The Intel Perspective
AWS and Intel share a passion for innovation and a history of helping startups to succeed. Intel will bring their newest technologies to Berlin, with talks and training that focus on the Internet of Things and the latest Intel Xeon processors.

On the Calendar
Here are some of the events that we have scheduled for October and November.

Tech Sessions:

  • October 15 – Processing streams of data with Amazon Kinesis and Other Tools (10 AM – 11 AM).
  • October 15 – STUPS – A Cloud Infrastructure for Autonomous Teams (Zalando) (5 PM – 6 PM).
  • October 19 – Building a global real-time discovery platform on AWS (Rocket Internet) (6 PM – 7 PM).
  • October 23 – Amazon Echo hackathon (10 AM – 4 PM).
  • October 27 – DevOps at Amazon: A Look at Our Tools and Processes (9 AM – 10 AM).
  • October 27 – Automating Software Deployments with AWS CodeDeploy (10 AM – 11 AM).
  • October 30 – Redshift Deep Dive (5 PM – 6 PM).
  • November 3 – Cost Optimization Workshop (5 PM – 6 PM).
  • November 3 – Amazon Aurora (6 PM – 7 PM).
  • November 6 – Introduction to Amazon Machine Learning (9 AM – 10 AM).
  • November 10 – Security Master Class (6 PM – 7 PM).

Business Sessions:

  • October 23 – Lessons Learned from 7 Accelerator Programs (6 PM – 7 PM).
  • October 26 – Funding cycles and Term Sheets (5 PM – 6 PM).
  • November 9 – Things to consider when PR-ing your startup (6 PM – 7 PM).

Get-Togethers & Networking:

  • October 22 – Berlin’s Godfather of Tech (6 PM – 7 PM).
  • November 11 – Watch out: The Bavarians are in town! (6 PM – 8 PM).

If you would like to learn more about a topic that’s not on this list, please let us know (you can stop by the Loft in person or you can leave a comment on this post).

Come in and Say Hello
Please feel free to stop in and say hello to my colleagues at the Berlin Loft if you happen to find yourself in Berlin!

— Jeff;

Wednesday, September 16, 2015

AWS Storage Update – New Lower Cost S3 Storage Option & Glacier Price Reduction

Like all AWS services, the Amazon S3 team is always listening to customers in order to better understand their needs. After studying a lot of feedback and doing some analysis on access patterns over time, the team saw an opportunity to provide a new storage option that would be well-suited to data that is accessed infrequently.

The team found that many AWS customers store backups or log files that are almost never read. Others upload shared documents or raw data for immediate analysis. These files generally see frequent activity right after upload, with a significant drop-off as they age. In most cases, this data is still very important, so durability is a requirement. Although this storage model is characterized by infrequent access, customers still need quick access to their files, so retrieval performance remains as critical as ever.

New Infrequent Access Storage Option
In order to meet the needs of this group of customers, we are adding a new storage class for data that is accessed infrequently. The new S3 Standard – Infrequent Access (Standard – IA) storage class offers the same high durability, low latency, and high throughput as S3 Standard. You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999% (eleven nines) of durability. Standard – IA has an availability SLA of 99%.

This new storage class inherits all of the existing S3 features that you know (and hopefully love) including security and access management, data lifecycle policies, cross-region replication, and event notifications.

Prices for Standard – IA start at $0.0125 / gigabyte / month (one and one-quarter US pennies), with a 30 day minimum storage duration for billing, and a $0.01 / gigabyte charge for retrieval (in addition to the usual data transfer and request charges). Further, for billing purposes, objects that are smaller than 128 kilobytes are charged for 128 kilobytes of storage. We believe that this pricing model will make this new storage class very economical for long-term storage, backups, and disaster recovery, while still allowing you to quickly retrieve older data if necessary.
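To make the billing model concrete, here’s a small JavaScript sketch (my own helper, not an AWS API) that estimates one month of Standard – IA charges for a set of objects:

```javascript
// Estimate one month of S3 Standard - IA charges: $0.0125 per GB-month of
// storage, with objects smaller than 128 KB billed as 128 KB, plus $0.01
// per GB retrieved. Request and data transfer charges are not included.
function standardIAMonthlyCost(objectSizesKB, retrievedGB) {
    var STORAGE_PER_GB   = 0.0125;
    var RETRIEVAL_PER_GB = 0.01;
    var MIN_BILLED_KB    = 128;
    var KB_PER_GB        = 1024 * 1024;

    var billedKB = 0;
    objectSizesKB.forEach(function (sizeKB) {
        // Small objects are rounded up to the 128 KB billing minimum
        billedKB += Math.max(sizeKB, MIN_BILLED_KB);
    });

    return (billedKB / KB_PER_GB) * STORAGE_PER_GB + retrievedGB * RETRIEVAL_PER_GB;
}
```

Note that a 64 KB object costs the same to store as a 128 KB one, so Standard – IA is best suited to larger, long-lived objects.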

You can define data lifecycle policies that move data between Amazon S3 storage classes over time. For example, you could store freshly uploaded data using the Standard storage class, move it to Standard – IA 30 days after it has been uploaded, and then to Amazon Glacier after another 60 days have gone by.

The new Standard – IA storage class is simply one of several attributes associated with each S3 object. Because the objects stay in the same S3 bucket and are accessed from the same URLs when they transition to Standard – IA, you can start using Standard – IA immediately through lifecycle policies without changing your application code. This means that you can add a policy and reduce your S3 costs immediately, without having to make any changes to your application or affecting its performance.

You can choose this new storage class (which is available today in all AWS regions) when you upload new objects via the AWS Management Console:

You can set up lifecycle rules for each of your S3 buckets. Here’s how you would establish the policies that I described above:

These functions are also available through the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, the AWS SDKs, and the S3 API.
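For example, the lifecycle policy described earlier (transition to Standard – IA after 30 days, then to Glacier at day 90) boils down to a small JSON document when supplied via the CLI or an SDK. Here’s a sketch (the rule ID is a placeholder; the empty prefix applies the rule to the entire bucket):

```json
{
  "Rules": [
    {
      "ID": "archive-older-data",
      "Prefix": "",
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```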

Here’s what some of our early users have to say about S3 Standard – Infrequent Access:

“For more than 13 years, SmugMug has provided unlimited storage for our customer’s priceless photos. With many petabytes of them stored on Amazon S3, it’s vital that customers have immediate, instant access to any of them at a moment’s notice – even if they haven’t been viewed in years. Amazon S3 Standard – IA offers the same high durability and performance as Amazon S3 Standard so we can continue to deliver the same amazing experience for our customers even as their cameras continue to shoot bigger, higher-quality photos and videos.”

Don MacAskill, CEO & Chief Geek

SmugMug

“We store a ton of video, and in many cases an object in Amazon S3 is the only copy of a user’s video. This means durability is absolutely critical, and so we are thrilled that Amazon S3 Standard – IA lets us significantly reduce storage costs on our older video objects without sacrificing durability. We also really appreciate how easy it is to start using Amazon S3 Standard – IA. With a few clicks we set up lifecycle policies that will transition older objects to Amazon S3 Standard – IA at regular intervals – we don’t have to worry about migrating them to new buckets, or impacting the user experience in any way.”

Brian Kaiser, CTO

Hudl

See the S3 Pricing page for complete pricing information on this new storage class.

Reduced Price for Glacier Storage
Effective September 1, 2015, we are reducing the price for data stored in Amazon Glacier from $0.01 / gigabyte / month to $0.007 / gigabyte / month. As usual, this price reduction will take effect automatically and you need not do anything in order to benefit from it. This price is for the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions; take a look at the Glacier Pricing page for full information on pricing in other regions.

— Jeff;

Monday, September 14, 2015

Moving Past Microsoft Windows Server 2003 End-of-Life Using AWS

In the guest post below, my colleagues Bryan Nairn and Niko Pamboukas list some options for those of you who are still running your applications on Windows Server 2003.

— Jeff;


As many of you may already know, on July 14th, 2015, Microsoft ended its extended support for Windows Server 2003. Microsoft publishes and maintains a support lifecycle for its operating systems to provide clarity on the availability of support for its products. Once an operating system reaches a certain age and extended support comes to an end, Microsoft stops issuing security and other updates.

Twelve years have passed since the original release of Windows Server 2003, and there are still a large number of businesses running critical applications and workloads on the Windows Server 2003 family of products. If you are one of the organizations still running Windows Server 2003-based workloads, you are not alone. Some industry experts estimate that there are more than 10 million servers running Windows Server 2003 today. Some of these workloads are virtualized, but many of them are installed on bare metal. In many cases these workloads are running on the original hardware, and the underlying physical servers are close to the end of their useful life.

The latest hardware currently available in the market may not necessarily be compatible with Windows Server 2003, thus making your purchasing decisions complex. Likewise, migration to a newer operating system version will likely require the purchase of new hardware, as the newer system will not necessarily contain all the drivers for the existing hardware.

This can present challenges for you, and many other organizations like yours, when considering what to do with your Windows Server 2003 infrastructure. We understand that it takes time to plan and execute a migration, and we are here to help. Whether you are maintaining 32-bit applications in the cloud, moving to a modern Microsoft Windows Server operating system or rewriting legacy applications, AWS can provide you with production-ready options for migration planning.

Here are some ideas and resources to help you to assess, plan and execute on your migration strategy for Windows Server 2003.

Move Your 32-bit Apps to the Cloud
It’s a common misconception that you cannot run 32-bit applications in the cloud. Amazon Elastic Compute Cloud (EC2) offers 32-bit instances that you can leverage today. You can start with our 32-bit Windows Server 2003 or Windows Server 2008 Amazon Machine Images (AMIs), or you can use VM Import to bring your own Windows virtual machine images into EC2. These options give you the breathing room you need to stay on 32-bit while you work on additional migration options for your applications.

You can also run your 32-bit applications on 64-bit instance types. This will give you additional options, including access to more than 4 GB of memory via PAE. In order to take advantage of this feature, you’ll need to contact AWS Developer Support.

Migrate to a Modern Operating System
If you are currently running 32-bit Windows 2003 or 2008 EC2 instances but are ready to migrate, now is the perfect time to get onto the latest version. AWS supports in-place upgrades from Windows 2003 to newer Windows operating systems; you can find the details for how to perform OS upgrades by visiting the EC2 Windows upgrade documentation page. As described in the documentation, this process updates the network driver on the instance so that it can be accessed via Remote Desktop after the OS has been upgraded.

Modernize Legacy Apps
When you are ready to start the process of modernizing legacy Windows Server 2003 applications, you can find the right resources for your needs with just a few clicks. AWS has an extensive partner network to help migrate applications to newer versions of Windows. The AWS Windows and .NET Developer Center provides tools, documentation, and code samples. The AWS SDK for .NET (which includes its own library, code samples, and Visual Studio templates) makes it easy to build your applications on Windows and .NET. If you are a Visual Studio user, it’s easy to get started with the SDK using the AWS Toolkit for Visual Studio. You can find more info on the EC2 Developer Resources page. You can also get connected and join the community of developers running Windows and .NET on AWS by visiting our Community Forum or AWS on Github.

If you need help with the migration of your Windows Server 2003 applications, AWS offers various levels of support, including technical documentation, the AWS Support Center, and access to qualified AWS partners. These partners specialize in cloud migration services; they are ready to help you assess which applications need to move and identify any risks, gaps, or modifications needed to migrate smoothly, even when migrating unsupported applications without redeploying them.

AWS as a Platform for Your Needs
In addition to Windows Server 2008 and Windows Server 2012 being more secure and better-supported software, your move from on-premises infrastructure to the cloud can bring additional benefits: the AWS cloud has been architected to provide a cost-effective, flexible, and secure computing environment. Since you can provision resources as your business dictates and you only pay for what you use, the cost savings of migrating to AWS can be significant. In addition, Amazon Virtual Private Cloud (VPC) provides an additional layer of security by enabling you to create your own logically isolated networks into which you can provision your resources. With a VPC you can specify your own IP address range and decide which instances are exposed to the internet and which remain private.

Start Today
As I mentioned earlier, now’s a great time to start the planning process, and we are here to help you. We anticipate that you will have questions and may want some help, so to get started read our essential Windows Server 2003 FAQ as well as the Windows Server 2003 End-of-Support page, both of which cover this transition in more detail. We realize that your migration away from Windows Server 2003 can be challenging; hopefully AWS can be there to help ease this transition.

— Bryan Nairn (Senior Product Manager) and Niko Pamboukas (Senior Product Manager)

Saturday, September 12, 2015

User Defined Functions for Amazon Redshift

The Amazon Redshift team is on a tear. They are listening to customer feedback and rolling out new features all the time! Below you will find an announcement of another powerful and highly anticipated new feature.

— Jeff;


Amazon Redshift makes it easy to launch a petabyte-scale data warehouse. For less than $1,000/Terabyte/year, you can focus on your analytics, while Amazon Redshift manages the infrastructure for you. Amazon Redshift’s price and performance have allowed customers to unlock diverse analytical use cases to help them understand their business. As you can see from blog posts by Yelp, Amplitude and Cake, our customers are constantly pushing the boundaries of what’s possible with data warehousing at scale.

To extend Amazon Redshift’s capabilities even further and make it easier for our customers to drive new insights, I am happy to announce that Amazon Redshift has added scalar user-defined functions (UDFs). Using PostgreSQL syntax, you can now create scalar functions in Python 2.7 custom-built for your use case, and execute them in parallel across your cluster.

Here’s a template that you can use to create your own functions:

CREATE [ OR REPLACE ] FUNCTION f_function_name 
( [ argument_name arg_type, ... ] )
RETURNS data_type
{ VOLATILE | STABLE | IMMUTABLE }
AS $$
  python_program
$$ LANGUAGE plpythonu;

Scalar UDFs return a single result value for each input value, similar to built-in scalar functions such as ROUND and SUBSTRING. Once defined, you can use UDFs in any SQL statement, just as you would use our built-in functions.

In addition to creating your own functions, you can take advantage of thousands of functions available through Python libraries to perform operations not easily expressed in SQL. You can even add custom libraries directly from S3 and the web. Out of the box, Amazon Redshift UDFs come integrated with the Python Standard Library and a number of other libraries, including:

  • NumPy and SciPy, which provide mathematical tools you can use to create multi-dimensional objects, do matrix operations, build optimization algorithms, and run statistical analyses.
  • Pandas, which offers high-level data manipulation tools built on top of NumPy and SciPy, enabling you to perform data analysis or build an end-to-end modeling workflow.
  • Dateutil and Pytz, which make it easy to manipulate dates and time zones (such as figuring out how many months are left before the next Easter that occurs in a leap year).
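As a sketch of the kind of date arithmetic these libraries make easy, here is that Easter example in plain Python. This is a hypothetical illustration: it uses the well-known anonymous Gregorian computus directly instead of dateutil’s own `easter` helper, so it runs with nothing but the standard library.

```python
from datetime import date

def easter(year):
    # Anonymous Gregorian algorithm (computus) for Easter Sunday.
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def next_leap_year_easter(today):
    # Find the first Easter after `today` that falls in a leap year.
    year = today.year
    while True:
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        if leap and easter(year) > today:
            return easter(year)
        year += 1

today = date(2015, 9, 12)
nxt = next_leap_year_easter(today)
months_left = (nxt.year - today.year) * 12 + nxt.month - today.month
print(nxt, months_left)  # 2016-03-27 6
```

Inside a Redshift UDF you would express the same logic with dateutil’s ready-made helpers rather than hand-rolling the computus.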

UDFs can be used to simplify complex operations. For example, if you wanted to extract the hostname out of a URL, you could use a regular expression such as:

SELECT REGEXP_REPLACE(url, '(https?)://([^@]*@)?([^:/]*)([/:].*|$)', '\3') FROM table;

Or, you could import urlparse, the URL parsing module in the Python standard library, and create a function that extracts hostnames:

CREATE FUNCTION f_hostname(url VARCHAR)
RETURNS varchar
IMMUTABLE AS $$
import urlparse
return urlparse.urlparse(url).hostname
$$ LANGUAGE plpythonu;

Now, in SQL all you have to do is:

SELECT f_hostname(url) 
FROM table;
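Before deploying a UDF, it can help to sanity-check its body locally, since the same Python runs on your machine and in the cluster. Here is a minimal sketch comparing the UDF’s logic against the regular-expression approach above; note that Redshift UDFs run Python 2.7, where the module is `urlparse`, while this local check uses Python 3’s renamed equivalent, `urllib.parse`. The sample URL is a hypothetical input.

```python
import re
from urllib.parse import urlparse  # Python 3 name for Python 2's urlparse module

# The same pattern used in the REGEXP_REPLACE example above.
HOST_RE = re.compile(r'(https?)://([^@]*@)?([^:/]*)([/:].*|$)')

def f_hostname(url):
    # Mirrors the UDF body: parse the URL and return its hostname.
    return urlparse(url).hostname

url = 'https://user@www.example.com:8080/path?q=1'
print(f_hostname(url))          # www.example.com
print(HOST_RE.sub(r'\3', url))  # www.example.com
```

Both approaches agree on well-formed URLs; the parser version is easier to read and handles edge cases (userinfo, ports) without regex gymnastics.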

As our customers know, Amazon Redshift obsesses about security. We run UDFs inside a restricted container that is fully isolated. This means UDFs cannot corrupt your cluster or negatively impact its performance. Also, functions that write files or access the network are not supported. Despite being tightly managed, UDFs leverage Amazon Redshift’s MPP capabilities, including being executed in parallel on each node of your cluster for optimal performance.

To learn more about creating and using UDFs, please see our documentation and a detailed post on the AWS Big Data blog. Also, check out this how-to guide from APN Partner Looker. If you’d like to share the UDFs you’ve created with other Amazon Redshift customers, please reach out to us at redshift-feedback@amazon.com. APN Partner Periscope has already created a number of useful scalar UDFs and published them here.

We will be patching your cluster with UDFs over the next two weeks, depending on your region and maintenance window setting. The new cluster version will be 1.0.991. We know you’ve been asking for UDFs for some time and would like to thank you for your patience. We look forward to hearing from you about your experience at redshift-feedback@amazon.com.

Tina Adams, Senior Product Manager

Tuesday, September 8, 2015

The AWS Pop-up Lofts are opening in London and Berlin

Amazon Web Services (AWS) has been working closely with the startup community in London, and across Europe, since we launched back in 2006. We have grown substantially in that time, and today more than two thirds of the UK’s startups with valuations of over a billion dollars, including Skyscanner, JustEat, Powa, Fanduel, and Shazam, are leveraging our platform to deliver innovative services to customers around the world.

This week I will have the pleasure of meeting up with our startup customers as we celebrate the opening of the first AWS Pop-up Loft outside of the US, in one of the greatest cities in the world: London. The London Loft opening will be followed in quick succession by our fourth Pop-up Loft opening its doors in Berlin. Both London and Berlin are vibrant cities with a concentration of innovative startups building their businesses on AWS. The Lofts will give them a physical place not only to learn about our services but also to help cultivate a community of AWS customers that can learn from each other.

Every time I’ve visited the Lofts in San Francisco and New York there has been a great buzz, with people getting advice from our solutions architects, getting training, or attending talks and demos. By opening the London and Berlin Lofts we’re hoping to cultivate that same community and expand on the base of loyal startups we have, such as Hailo, YPlan, SwiftKey, Mendeley, GoSquared, Playmob, and Yoyo Wallet, to help them grow their companies globally and be successful.

You can expect to see some of the brightest and most creative minds in the industry on hand in the Lofts to help, and I’d encourage all local startups to make the most of the resources at your fingertips. These range from technology resources to access to our vast network of customers, partners, accelerators, incubators, and venture capitalists, who will all be in the Loft to help you gain the insight you need, get advice on how to secure funding, and build the ‘softer skills’ needed to grow your businesses.

The AWS Pop-up Loft in London will be open in Moorgate from September 10 to October 29, Monday through Friday, between 10 AM and 6 PM (and later for evening events). You can go online now at http://awsloft.london to make one-on-one appointments with an AWS expert, and to register for boot camps and technical sessions, including:

  • Ask an Architect: an hour-long session which you can schedule with a member of the AWS technical team. Bring your questions about AWS architecture, cost optimisation, services and features, or anything else AWS related. You can also drop in if you don’t have an appointment.
  • Technical Bootcamps: one-day training sessions taught by experienced AWS instructors and solutions architects, giving you hands-on experience using a live environment with the AWS Management Console. There is a ‘Getting Started with AWS’ bootcamp as well as a Chef bootcamp, which will show customers how they can safeguard their infrastructure, manage complexity, and accelerate time to market.
  • Self-paced Hands-on Labs: open to everyone from beginners through advanced users, these labs will help you sharpen your AWS technical skills at your own pace, and are available for free in the Loft during operating hours.

The London Loft will also feature an IoT Lab with a range of devices running on AWS services, many of which have been developed by our Solutions Architects. Visitors to the Loft will be able to participate in live demos and Q&A opportunities, as our technical team demonstrates what is possible with IoT on AWS.

You are all invited to join us for the grand opening party at the Loft in London on September 10 at 6 PM. There will be food, drinks, a DJ, and free swag. The event will be packed, so RSVP today if you want to come and mingle with hot startups, accelerators, incubators, VCs, and our AWS technical experts. Entrance is on a first-come, first-served basis.

Look out for more details on the Berlin Loft, which will follow soon. I look forward to seeing you in the new European Lofts in the coming weeks!