AWS in 2015: Why You Need to Switch from PV to HVM
Until now, most average Amazon Web Services (AWS) users could effectively ignore the distinction between paravirtual (PV) and hardware virtual machine (HVM) instances. HVM was a technology reserved for high-performance, large-capacity use cases (e.g., a 10 GbE LAN connection or more than 64 GB of RAM). But with the changes being made in AWS, all users can and should be considering HVM for their instances.
In the past, PV offered better performance on the Xen hypervisor than HVM. The performance penalty for HVM was not large, but it was measurable. Even so, HVM had to be used in certain cases where PV posed a bottleneck to the underlying power of the host system; the extra kernel layer imposed by PV was the issue.
As the AWS platform has evolved, the penalty for HVM relative to PV has grown smaller. This is due to multiple factors: improvements to the Xen hypervisor, newer-generation CPUs with new instruction sets, EC2 driver improvements, and overall infrastructure changes in AWS. Amazon has confirmed as much in its documentation.
Now the performance gap has shrunk so much that in many cases an HVM instance will outperform a PV instance of the same size and spec. Again, the difference isn't dramatic for small instances, but if you use HVM to take advantage of larger or more specialized instance types, the gains are huge.
Performance aside, some instance types (and soon some entire regions) are only available as HVM. Depending on the use case, these can allow for significant cost savings, or better performance at the same cost.
HVM Exclusive Instances
While restricting yourself to HVM may initially limit the number of AMIs available to you, it will dramatically increase the number of instance types you can choose from. Here are some of the more exciting options made available by HVM:
T2 Instances - Bursty CPU Loads
In early July 2014, AWS launched a new instance type, the T2, which offers low-to-moderate baseline CPU performance with the ability to burst. For a comparable instance with the same memory and peak CPU rate, the cost is dramatically lower; these are now the cheapest instances available. They are offered only as HVM instances.
The bursting capability is handled with an innovative token system. The instance is assigned a CPU baseline of 20%-40% (according to instance size), and any time usage is below the baseline, the instance accumulates CPU-minute tokens. If the load rises above the baseline, the instance consumes its accumulated tokens until they are depleted. Once all tokens are gone, maximum CPU performance is throttled down to the baseline value.
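To make the token mechanics concrete, here is a toy sketch in shell arithmetic. The 20% baseline, the interval lengths, and the token units are illustrative assumptions, not Amazon's exact credit formula:
baseline=20   # assumed baseline CPU % for a small T2 (varies by instance size)
credits=0     # accumulated CPU-minute tokens (illustrative units)
for load in 5 10 80 80 80; do   # hypothetical CPU demand per interval
  if [ "$load" -le "$baseline" ]; then
    credits=$((credits + baseline - load))     # below baseline: accrue tokens
  elif [ "$credits" -ge $((load - baseline)) ]; then
    credits=$((credits - (load - baseline)))   # burst above baseline: spend tokens
  else
    credits=0
    load=$baseline                             # tokens depleted: throttled to baseline
  fi
  echo "interval: effective CPU ${load}%, tokens left: ${credits}"
done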
Suggested use cases: any instance not consistently using 20%-40% of the available CPU; development and staging servers; production servers with spiky traffic and load patterns.
R3 Instances - High Memory for NoSQL
With the growing popularity of NoSQL databases like MongoDB, demand for high-memory instances is growing. MongoDB lives and dies by RAM: once running on an instance, Mongo will reserve every free MB of memory it can*. It then self-optimizes by keeping its working set of data, as well as its indexes, in RAM. More memory on the system means more indexes and a larger working set can fit in memory.
*If MongoDB is running on an instance alongside other processes that subsequently need RAM, MongoDB will concede its reserved memory to those processes. A system running MongoDB will almost always show 100% RAM usage.
The previous generation of AWS instances had limited options for affordable high-memory instances. This year the R3 family of instances was introduced, bringing the cost per GB way down. The R3 family is HVM only.
The smallest (r3.large) comes with 15 GB of RAM and costs 40% less than the next cheapest instance of any family with 15 GB of RAM (m3.xlarge). The tradeoff is that while the r3.large has the same RAM as an m3.xlarge, it has only as much CPU as an m3.large.
Nearly all of OPSWAT's products rely on MongoDB. The performance is great, and the ability to change the schema as our products evolve is really valuable. When we switched our instances from M1 to R3, the performance increases were huge, even while reducing our total monthly spend.
Be sure to check the size of your MongoDB working set before selecting a new instance size, so you neither undersize nor oversize.
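On the MongoDB versions current at the time of writing (2.6 with the MMAPv1 storage engine), serverStatus can return a working set estimate when you ask for it explicitly; treat the numbers as a rough guide, not an exact measurement:
# pagesInMemory is reported in 4 KB pages (MongoDB 2.6 / MMAPv1)
mongo --quiet --eval 'printjson(db.serverStatus({workingSet: 1}).workingSet)'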
Enhanced Networking Instances
In addition to certain instance families being available only as HVM, some features are available only for HVM. One of these is enhanced networking. Networking performance of EC2 instances scales with the size and family of the instance, but unfortunately it isn't benchmarked as clearly as resources like memory and CPU: AWS provides a guideline with generic terms like "low", "medium", and "high" for network performance. Some searching will turn up measurements made by the community that put real numbers to these terms.
If the LAN performance of your instance is insufficient, and the instance is HVM, you can enable enhanced networking at no additional cost. In addition to increasing throughput, it improves the quality of the connection by decreasing jitter.
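If you have the AWS CLI configured, checking and enabling the SR-IOV attribute looks roughly like this (i-12345678 is a placeholder; the instance must be stopped, must be in a VPC, and the AMI must include the ixgbevf driver):
aws ec2 describe-instance-attribute --instance-id i-12345678 --attribute sriovNetSupport
aws ec2 stop-instances --instance-ids i-12345678
aws ec2 modify-instance-attribute --instance-id i-12345678 --sriov-net-support simple
aws ec2 start-instances --instance-ids i-12345678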
One thing to note: for any instance where network traffic is critical, you should ensure that EBS Optimization is enabled. Because EBS volumes are LAN-connected, they can compete for available bandwidth with the OS's other network operations. Enabling EBS Optimization creates a dedicated network connection for the EBS volumes, segregating that traffic from normal network traffic.
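EBS Optimization can be set at launch or toggled later from the CLI on a stopped instance (placeholder ID again; note that only certain instance types support it):
aws ec2 modify-instance-attribute --instance-id i-12345678 --ebs-optimized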
For cluster computing, Enhanced Networking is a must-have.
Other Reasons to Switch to HVM
In addition to the unique instance types mentioned above, there are some other compelling reasons to make the move to HVM:
"Snappy" Ubuntu is an HVM exclusive
In December 2014, AWS made Canonical's new "Snappy" edition of Ubuntu Core available in EC2. Designed for Docker and the cloud in general, Snappy is an exciting new option for Linux developers. It's fast, purpose-built and forward-thinking. And it's only available on HVM instances.
Frankfurt Germany is basically HVM only
With the launch of the EC2 Frankfurt, Germany region, Amazon has limited the types and sizes of instances that can run as PV. For all practical purposes, anyone considering deployments in Frankfurt must run HVM.
N2W Software wrote a great post on this topic recently.
There's no turning back
AWS is providing its newer and better offerings for HVM to the exclusion of PV. Even if you don't have an immediate need for HVM, it is not always easy (or even possible) to convert from PV later. There's no additional cost, so make HVM your default choice for any new EC2 instances you deploy.
Okay, so how?
For new instances, it's easy: just make sure the AMI is labeled HVM. Done.
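If you prefer the CLI, you can filter on virtualization type directly; for example, to list Amazon-owned HVM AMIs (the name pattern shown is just one assumption of what you might search for):
aws ec2 describe-images --owners amazon \
    --filters "Name=virtualization-type,Values=hvm" "Name=name,Values=amzn-ami-hvm-*" \
    --query 'Images[*].[ImageId,Name]' --output text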
For existing instances, it's a bit more involved. Assume you already have EC2 instances that are PV but could benefit from being HVM; there are two choices for this scenario:
- Rebuild your servers from a fresh HVM AMI
- Convert your PV servers to HVM
For option #1, there are several HVM AMIs available; a quick Google search should help you find one. For some reason they don't always come up in a direct search of the Amazon Marketplace.
For option #2, depending on your current AMI, the process can be easy or quite complex.
Converting Amazon Linux
Migration from PV to HVM is generally easier with an Amazon Linux AMI, since Amazon supports that OS better than CentOS (despite the two being binary compatible). There are reports that with an EBS-backed Amazon Linux instance, the root drive can be detached from a PV instance, attached to an HVM instance, and a new AMI created from it (http://serverfault.com/questions/439976/create-an-aws-hvm-linux-ami-from-an-existing-paravirtual-linux-ami). I have not personally tested this, but it appears to be reliable information.
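For the record, that (untested) approach would look something like the following in CLI terms. All IDs here are placeholders, and this assumes you have already launched, and then stopped, an instance from an Amazon Linux HVM AMI:
aws ec2 stop-instances --instance-ids i-aaaa1111        # the existing PV instance
aws ec2 detach-volume --volume-id vol-bbbb2222          # its root EBS volume
# Detach the stopped HVM instance's own root volume, then attach the PV root in its place:
aws ec2 attach-volume --volume-id vol-bbbb2222 --instance-id i-cccc3333 --device /dev/xvda
aws ec2 create-image --instance-id i-cccc3333 --name "amazon-linux-pv-to-hvm"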
Converting Windows
An upgrade script is available here: http://aws.amazon.com/developertools/2187524384750206
+1 for Windows Server (it should win occasionally, right?)
Converting Other Operating Systems
With the popularity of AWS, migration paths can be found for most operating systems simply by searching the AWS forums or a search engine. What follows is a specific guide to converting CentOS images:
Converting CentOS
I had about 16 CentOS servers in operation that I needed to convert; some were production, others staging. I started with the staging servers and created the instructions below. By the end of the conversion project it had become a simple background task requiring only limited interaction. Most of the time is spent waiting for the dd copy, AMI creation, snapshots, etc.
One benefit of the method given below is that the final result has the same hostname, LAN IP, WAN IP, etc. as the starting server.
Requirements
- Instance must be EBS-backed (EBS root drive)
- Grub must be installed
- A 'working' EC2 instance with Amazon Linux
- Sufficient privileges in AWS
- Root privileges for the original instance
- CentOS of course
While these instructions are specifically tested on CentOS, they should work on any RHEL-based distribution. As always, take snapshots before making any significant changes.
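Taking those snapshots is a single CLI call per volume (the volume ID is a placeholder):
aws ec2 create-snapshot --volume-id vol-12345678 --description "pre-HVM-migration backup"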
Summary
- Strip the marketplace code (if any) from the source EBS volume.
- Attach the source and destination volumes to the working instance.
- Partition the destination volume and dd the data from the source volume to the destination volume.
- Make required changes to the configuration files on the destination volume and install grub.
- Create a snapshot of the destination volume and register an AMI.
- Create a new HVM instance from the AMI.
- Clean up temporary volumes and the working instance.
Preparation
These instructions are adapted from a post on the AWS forums [1].
- Install grub on the source instance
a. sudo yum install grub -y
- Create an EC2 instance in the same AZ, using Amazon Linux 64-bit (an m3.medium HVM should be fine for this purpose). This will be the 'working' instance.
- Check the root EBS volume to see if it has a marketplace product code. If it does, AWS will likely block you from attaching it to another instance unless it's the root volume. That won't work for us, so we need to remove the restriction:
a. Inspect: launch the AWS console, go to EBS: Volumes, highlight the volume, and look at the 'Product codes' field in the description. If it says 'marketplace ….' then most likely you cannot simply reattach this volume.
b. Confirm: launch an EBS-backed Amazon Linux instance in the same AZ. Stop the source CentOS instance. Detach the volume from the source. Try to attach it to the new Amazon Linux instance. If you receive an error, then you will need to strip the marketplace code.
- Stripping the marketplace code (if necessary):
a. Option 1: Ask Amazon! Go to the AWS forums, search for similar requests to see what information Amazon asks for (probably the EBS volume ID or AMI ID), and ask them to kindly remove the marketplace code.
b. Option 2 (five steps):
i. Create a new EBS volume in the same AZ as the source (of equal or greater size). I suggest using an SSD (general purpose) class volume to accelerate the process.
ii. Attach this empty volume to the source instance
iii. SSH to the source instance
iv. Use the dd command to copy the source root drive to the new empty volume:
sudo dd if=/dev/<source> of=/dev/<destination> bs=4k conv=notrunc,noerror,sync
v. Wait a while for it to complete. You can check the status by logging in from another console and running sudo killall -USR1 dd; the progress will be printed in the original console (where you initiated dd).
- If you want to minimize changes on the source instance between now and the completion of the migration, stop the source instance now. Otherwise just leave it running until you are ready to switch over.
- Detach the newly created (and cloned) volume from the source instance and attach it to the Amazon 'working' instance as /dev/sdm
- Create another EBS volume of equal or greater size, and attach it to the Amazon Linux 'working' server as /dev/sdo
a. Before you go much further, I suggest creating an AMI of the original PV instance as a backup. It can take a while to make the AMI, so if you start it now it will be ready by the time you need to terminate the instance.
- SSH to the Amazon Linux server
- Change to root
sudo su
- Partition the 'destination' volume.
parted /dev/xvdo --script 'mklabel msdos mkpart primary 1M -1s print quit'
partprobe /dev/xvdo
udevadm settle
- Check the source file system using fsck.ext4 (or whatever is appropriate for your source)
e2fsck -f /dev/xvdm
- Minimize the original filesystem to speed up the copy
resize2fs -M /dev/xvdm
- Observe the output from resize2fs and note it down for the next step. For example:
# Resizing the file system on /dev/xvdm to 269020 (4k) blocks.
# The file system on /dev/xvdm is now 269020 blocks long.
- Duplicate the 'source' volume to the 'destination' volume.
dd if=/dev/xvdm of=/dev/xvdo1 bs=<block size from previous step> count=<block count from previous step>
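Using the example resize2fs output above, the command would be:
dd if=/dev/xvdm of=/dev/xvdo1 bs=4k count=269020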
- Expand the FS on the new partition
resize2fs /dev/xvdo1
- Prepare the 'destination' volume for the chrooted grub installation
mount /dev/xvdo1 /mnt
cp -a /dev/xvdo /dev/xvdo1 /mnt/dev/
rm -f /mnt/boot/grub/*stage*
cp /mnt/usr/*/grub/*/*stage* /mnt/boot/grub/
rm -f /mnt/boot/grub/device.map
- Install grub in the chroot environment. This step does an offline grub installation on the destination device, which is required for the HVM instance:
cat <<EOF | chroot /mnt grub --batch
device (hd0) /dev/xvdo
root (hd0,0)
setup (hd0)
EOF
- Remove the temporary device nodes from the destination volume, which were required to install grub (as above)
rm -f /mnt/dev/xvdo /mnt/dev/xvdo1
- Update the grub configuration
vi /mnt/boot/grub/grub.conf
- Change root (hd0) to root (hd0,0)
- Add console=ttyS0 to the kernel line (or replace any existing console=* entry)
- Replace root=* with root=LABEL=/ on the kernel line
- Add xen_pv_hvm=enable to the kernel line (no longer required, but I still add it)
If there are two entries for CentOS, make these modifications to both. This probably isn't needed, but I did it anyway.
Note: The grub.conf you are viewing should match the source OS. If it says Amazon Linux, then you are looking at the grub.conf of the 'working' server instead. This can happen if you edit menu.lst instead of grub.conf, because of the symbolic link.
- Update fstab
vi /mnt/etc/fstab
LABEL=/ / ext4 defaults,noatime 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
(keep or comment out any other attached volumes)
- Create the label on /dev/xvdo1 and unmount the device.
e2label /dev/xvdo1 /
sync
umount /mnt
- Return to the AWS console. Find the new 'destination' volume and make a snapshot.
- Create a new HVM AMI from this snapshot. Be sure to select virtualization type = HVM. The default block device settings should be fine.
- Launch a new instance from this AMI and confirm it works
- Make a final AMI from the existing PV instance, and note its Elastic IP and LAN IP.
- Terminate the existing PV instance (terminate, not stop). This frees the LAN IP so it can be reassigned to your new HVM instance. (This is only required if you want to keep the same static IP on the LAN.)
- Launch a new instance from the HVM AMI, use the LAN IP from the old instance, and assign the Elastic IP using the network interface ID. (A rough CLI sketch of these final steps follows.)
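For reference, the final console steps above map to CLI calls roughly like these. Every ID here is a placeholder, the instance type and private IP are illustrative, and the block device mapping assumes the /dev/xvda layout created earlier:
aws ec2 create-snapshot --volume-id vol-aaaa1111 --description "HVM destination volume"
aws ec2 register-image --name "centos-hvm" --virtualization-type hvm \
    --architecture x86_64 --root-device-name /dev/xvda \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"SnapshotId":"snap-bbbb2222"}}]'
aws ec2 run-instances --image-id ami-cccc3333 --instance-type m3.medium \
    --subnet-id subnet-dddd4444 --private-ip-address 10.0.0.25
aws ec2 associate-address --allocation-id eipalloc-eeee5555 --network-interface-id eni-ffff6666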
References
[1] ChrisC@AWS, "AWS Developer Forums: Convert CentOS PV to HVM," [Online]. Available: https://forums.aws.amazon.com/thread.jspa?threadID=155526&tstart=25. [Accessed 4 July 2014].
