Microsoft Virtual Machine Converter 2.0

Microsoft Virtual Machine Converter (MVMC) 2.0 is a supported, freely available solution for converting VMware-based virtual machines and virtual disks to Hyper-V-based virtual machines and virtual hard disks (VHDs).

MVMC can be deployed with minimal dependencies. Because MVMC provides native support for Windows PowerShell®, it enables scripting and integration with data center automation workflows, such as those authored and run within Microsoft System Center Orchestrator 2012 R2, and it can also be invoked directly from the Windows PowerShell command line. The solution is simple to download, install, and use. In addition to the Windows PowerShell capability, MVMC provides a wizard-driven GUI to facilitate virtual machine conversion.

[Figure: Migration of a VM with MVMC 2.0]

With this release, you can take advantage of many updated features, including:

  • Added support for vCenter and ESX(i) 5.5
  • Support for VMware virtual hardware versions 4–10
  • Linux guest OS migration support, including CentOS, Debian, Oracle Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise, and Ubuntu

Microsoft has also added two great new features:

  • On-premises VM to Azure VM conversion: You can now migrate your VMware virtual machines straight to Azure. Ease your migration process and take advantage of Microsoft’s cloud infrastructure through a simple wizard-driven experience.
  • PowerShell interface for scripting and automation support: Automate your migration via workflow tools, including System Center Orchestrator, and hook MVMC 2.0 into larger processes such as candidate identification and migration activities. A minimal scripted-conversion sketch follows this list.
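For illustration, here is a minimal sketch of a scripted disk conversion with the MVMC PowerShell module. The module path below is the default install location, and the source and destination paths are placeholder assumptions; verify the cmdlet names and parameters against the documentation for your installed version:

    # Load the MVMC cmdlets (default install path; adjust if you installed elsewhere)
    Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

    # Convert a VMware VMDK to a dynamically expanding VHDX
    # (source and destination paths are placeholders)
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'D:\VMs\web01\web01.vmdk' `
        -DestinationLiteralPath 'D:\Converted' `
        -VhdType DynamicHardDisk -VhdFormat Vhdx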

At this time, Microsoft is also announcing the expected availability of MVMC 3.0 in the fall of 2014. That release will provide physical-to-virtual (P2V) machine conversion for supported versions of Windows.

For more information about the MVMC 2.0 solution, including how to download it, visit here.

Summary

With Windows Server 2012 R2 Hyper-V and System Center 2012 R2, Microsoft has a solution that enables customers to virtualize their key, mission-critical workloads and realize significant savings compared to VMware. Hyper-V enables customers to run their largest workloads, offering massive host, VM, and cluster scalability, and it provides the powerful storage, networking, and automation features that enterprises and service providers demand. With a number of supported tools, you have many options available to test and continue your migration to Hyper-V.

vSphere 4.1 features list!

Source: http://virtualization.info/en/news/2010/07/release-vmware-vsphere-4-1.html

As expected, VMware today released a significant update to its vSphere virtual infrastructure.

vSphere 4.1 introduces an impressive number of new features that virtualization.info partially unveiled in May:

  • Scripted Install for ESXi. Scripted installation of ESXi to local and remote disks allows rapid deployment of ESXi to many machines. You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting.
  • vSphere Client Removal from ESX/ESXi Builds. For ESX and ESXi, the vSphere Client is available for download from the VMware Web site. It is no longer packaged with builds of ESX and ESXi.
  • Boot from SAN. vSphere 4.1 enables ESXi boot from SAN (BFS). iSCSI, FCoE, and Fibre Channel boot are supported.
  • Hardware Acceleration with vStorage APIs for Array Integration (VAAI). ESX can offload specific storage operations to compliant storage hardware. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.
  • Storage Performance Statistics. vSphere 4.1 offers enhanced visibility into storage throughput and latency of hosts and virtual machines, and aids in troubleshooting storage performance issues. NFS statistics are now available in vCenter Server performance charts, as well as esxtop. New VMDK and datastore statistics are included. All statistics are available through the vSphere SDK.
  • Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.
  • iSCSI Hardware Offloads. vSphere 4.1 enables 10Gb iSCSI hardware offloads (Broadcom 57711) and 1Gb iSCSI hardware offloads (Broadcom 5709).
  • Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).
  • IPv6 Enhancements. IPv6 in ESX supports Internet Protocol Security (IPsec) with manual keying.
  • Load-Based Teaming. vSphere 4.1 allows dynamic adjustment of the teaming algorithm so that the load is always balanced across a team of physical adapters on a vNetwork Distributed Switch.
  • E1000 vNIC Enhancements. E1000 vNIC supports jumbo frames in vSphere 4.1.
  • Windows Failover Clustering with VMware HA. Clustered Virtual Machines that utilize Windows Failover Clustering/Microsoft Cluster Service are now fully supported in conjunction with VMware HA.
  • VMware HA Scalability Improvements. VMware HA has the same limits for virtual machines per host, hosts per cluster, and virtual machines per cluster as vSphere.
  • VMware HA Healthcheck and Operational Status. The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster.
  • VMware Fault Tolerance (FT) Enhancements. vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, and reports why a host might not support FT. In addition, you can disable VMware HA when FT-enabled virtual machines are deployed in a cluster, allowing for cluster maintenance operations without turning off FT.
  • DRS Interoperability for VMware HA and Fault Tolerance (FT). FT-enabled virtual machines can take advantage of DRS functionality for load balancing and initial placement. In addition, VMware HA and DRS are tightly integrated, which allows VMware HA to restart virtual machines in more situations.
  • Enhanced Network Logging Performance. Fault Tolerance (FT) network logging performance allows improved throughput and reduced CPU usage. In addition, you can use vmxnet3 vNICs in FT-enabled virtual machines.
  • Concurrent VMware Data Recovery Sessions. vSphere 4.1 provides the ability to concurrently manage multiple VMware Data Recovery appliances.
  • vStorage APIs for Data Protection (VADP) Enhancements. VADP now offers VSS quiescing support for Windows Server 2008 and Windows Server 2008 R2 servers. This enables application-consistent backup and restore operations for Windows Server 2008 and Windows Server 2008 R2 applications.
  • vCLI Enhancements. vCLI adds options for SCSI, VAAI, network, and virtual machine control, including the ability to terminate an unresponsive virtual machine. In addition, vSphere 4.1 provides controls that allow you to log vCLI activity.
  • Lockdown Mode Enhancements. VMware ESXi 4.1 lockdown mode allows the administrator to tightly restrict access to the ESXi Direct Console User Interface (DCUI) and Tech Support Mode (TSM). When lockdown mode is enabled, DCUI access is restricted to the root user, while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter Server. Direct access to the host using the vSphere Client is not permitted.
  • Access Virtual Machine Serial Ports Over the Network. You can redirect virtual machine serial ports over a standard network link in vSphere 4.1. This enables solutions such as third-party virtual serial port concentrators for virtual machine serial console management or monitoring.
  • vCenter Converter Hyper-V Import. vCenter Converter allows users to point to a Hyper-V machine. Converter displays the virtual machines running on the Hyper-V system, and users can select a powered-off virtual machine to import to a VMware destination.
  • Enhancements to Host Profiles. You can use Host Profiles to roll out administrator password changes in vSphere 4.1. Enhancements also include improved Cisco Nexus 1000V support and PCI device ordering configuration.
  • Unattended Authentication in vSphere Management Assistant (vMA). vMA 4.1 offers improved authentication capability, including integration with Active Directory and commands to configure the connection.
  • Updated Deployment Environment in vSphere Management Assistant (vMA). The updated deployment environment in vMA 4.1 is fully compatible with vMA 4.0. A significant change is the transition from RHEL to CentOS.
  • vCenter Orchestrator 64-bit Support. vCenter Orchestrator 4.1 provides a client and server for 64-bit installations, with an optional 32-bit client. The performance of the Orchestrator server on 64-bit installations is greatly enhanced, as compared to running the server on a 32-bit machine.
  • Improved Support for Handling Recalled Patches in vCenter Update Manager. Update Manager 4.1 immediately sends critical notifications about recalled ESX and related patches. In addition, Update Manager prevents you from installing a recalled patch that you might have already downloaded. This feature also helps you identify hosts where recalled patches might already be installed.
  • License Reporting Manager. The License Reporting Manager provides a centralized interface for all license keys for vSphere 4.1 products in a virtual IT infrastructure and their respective usage. You can view and generate reports on license keys and usage for different time periods with the License Reporting Manager. A historical record of the utilization per license key is maintained in the vCenter Server database.
  • Power Management Improvements. ESX 4.1 takes advantage of deep sleep states to further reduce power consumption during idle periods. The vSphere Client has a simple user interface that allows you to choose one of four host power management policies. In addition, you can view the history of host power consumption and power cap information on the vSphere Client Performance tab on newer platforms with integrated power meters.
  • Reduced Overhead Memory. vSphere 4.1 reduces the amount of overhead memory required, especially when running large virtual machines on systems with CPUs that provide hardware MMU support (AMD RVI or Intel EPT).
  • DRS Virtual Machine Host Affinity Rules. DRS provides the ability to set constraints that restrict placement of a virtual machine to a subset of hosts in a cluster. This feature is useful for enforcing host-based ISV licensing models, as well as keeping sets of virtual machines on different racks or blade systems for availability reasons.
  • Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.
  • vMotion Enhancements. In vSphere 4.1, vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous virtual machine migrations and faster individual virtual machine migrations. The result is a performance improvement of up to 8x for an individual virtual machine migration, and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively).
  • ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles.
  • Configuring USB Device Passthrough from an ESX/ESXi Host to a Virtual Machine. You can configure a virtual machine to use USB devices that are connected to an ESX/ESXi host where the virtual machine is running. The connection is maintained even if you migrate the virtual machine using vMotion.
  • Improvements in Enhanced vMotion Compatibility. vSphere 4.1 includes an AMD Opteron Gen. 3 (no 3DNow!) EVC mode that prepares clusters for vMotion compatibility with future AMD processors. EVC also provides numerous usability improvements, including the display of EVC modes for virtual machines, more timely error detection, better error messages, and the reduced need to restart virtual machines.
  • vCenter Update Manager Support for Provisioning, Patching, and Upgrading EMC’s ESX PowerPath Module. vCenter Update Manager can provision, patch, and upgrade third-party modules that you can install on ESX, such as EMC’s PowerPath multipathing software. Using the capability of Update Manager to set policies using the Baseline construct and the comprehensive Compliance Dashboard, you can simplify provisioning, patching, and upgrade of the PowerPath module at scale.
  • User-configurable Number of Virtual CPUs per Virtual Socket. You can configure virtual machines to have multiple virtual CPUs reside in a single virtual socket, with each virtual CPU appearing to the guest operating system as a single core. Previously, virtual machines were restricted to having only one virtual CPU per virtual socket.
  • Expanded List of Supported Processors. The list of supported processors has been expanded for ESX 4.1. Among the supported processors is the Intel Xeon 7500 Series processor, code-named Nehalem-EX (up to 8 sockets).

More than that, with vSphere 4.1 VMware is enriching its offering for the SMB market, adding VMotion to the Essentials Plus license:

[Figure: vSphere 4.1 SKU comparison]

Microsoft’s Hyper-V R2 vs. VMware’s vSphere: A feature comparison

VMware and Microsoft are ramping up their virtualization games with relatively new releases. Scott Lowe compares and contrasts some of the major features in vSphere and Hyper-V R2.

Source: http://blogs.techrepublic.com.com/datacenter/?p=1820

Microsoft was late to the virtualization game, but the company has made gains against its primary competitor in the virtualization marketplace, VMware. In recent months, both companies released major updates to their respective hypervisors: Microsoft’s Hyper-V R2 and VMware’s vSphere. In this look at the hypervisor products from both companies, I’ll compare and contrast some of the products’ more common features and capabilities. I do not, however, make recommendations about which product might be right for your organization.

Table A compares items in four editions of vSphere and three available editions of Hyper-V R2. Below the table, I explain each of the comparison items. (Product note: With the release of vSphere, VMware has released an Enterprise Plus edition of its hypervisor product. Enterprise Plus provides an expanded set of capabilities that were not present in older product versions. Customers have to upgrade from Enterprise to Enterprise Plus in order to obtain these capabilities.)

Table A

Continue reading

System Center Virtual Machine Manager 2008 R2 RTM!


Zane Adam: System Center Virtual Machine Manager 2008 R2 has RTM’d, and general availability via volume licensing is set for October 1. This is great news for all, and I’d especially like to thank our VMM 2008 R2 development, product management, and test teams. Lots of hard work, fueled by their passion for virtualization and management, has resulted in a very good software release.

A 180-day evaluation version is now available, too, on the Microsoft Download site. You can access it here.

Please experience for yourself what the 10,000+ people who downloaded our Release Candidate, plus organizations such as Continental Airlines, Lionbridge Technologies, and Indiana University, have already seen with VMM 2008 R2!

I encourage everyone to explore the new System Center Virtual Machine Manager 2008 R2 and its new features such as quick storage migration, live migration, and many others. We even offer support for vSphere 4.

To learn more about the new features and capabilities of VMM 2008 R2, please try to attend our upcoming TechNet session, ‘Technical Overview of System Center Virtual Machine Manager 2008 R2’. Presented by our Technical Product Manager Kenon Owens, it will be chock-full of new and cool VMM 2008 R2 items. Go here to register for this Wednesday, September 09, 2009 (10:00 AM Pacific) event.

Source: http://techlog.org/archive/2009/08/24/system_center_virtual_machine_

Manage your VMware environment from your iPhone

Want to manage your VMware virtual environment on the go? Now there’s an app for that! VManage is an application developed to let IT administrators view critical data about their virtual infrastructure and perform fundamental tasks, such as VMotion migrations, from anywhere at any time. Viewing basic performance data (more advanced data to come) is as easy as selecting a virtual machine or host and examining the details. Simply add a VirtualCenter server address and credentials (and a VPN if necessary), and that’s it. So if you’re an IT administrator who doesn’t spend every waking moment in front of your PC, this is the tool for you.

Environment Configuration Note:
By default, the VirtualCenter server exposes port 443 for its web service. This port must be reachable from the iPhone/iPod Touch for the VManage application to interact with it, either via a VPN or by exposing the port to the web.
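As a quick sanity check, you can verify that the port is reachable from a Windows machine on the same network path, using the built-in Test-NetConnection cmdlet (available on Windows 8/Server 2012 and later); the server name here is a placeholder:

    # Check that the VirtualCenter web service port is reachable
    # ('vcenter.example.com' is a placeholder for your server)
    Test-NetConnection -ComputerName 'vcenter.example.com' -Port 443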

Application Configuration Note:
The iPhone/iPod Touch Settings application needs to be configured as follows:

server: https://<VirtualCenter server>/sdk    (required)
domain: AD domain name                        (optional)
username: AD user                             (required)
password: AD password                         (required)

VMware vSphere

There’s hardly any point in covering the announcements of today. There are so many people blogging right now that no one will have the chance to keep up with reading. That’s why I decided not to write or copy any of the announcements. Of course I just might give my thoughts on the webcast this evening but that’s probably it… Anyway, I divided it up in two major sections “News” and “Previews” and within these sections VMware and of course “Bloggers Community”. I will keep updating this post, make sure to visit it again.

Continue reading

How to convert VMware images to Hyper-V images

Here’s a small how-to based on my experiences:

1) Uninstall VMware Tools from your VM

2) Shut down the VM

If your VMs use SCSI disks (as mine did, because VMware recommends SCSI) and the operating system is Windows XP, Windows Server 2003, or earlier, you have to add the IDE driver to the VM before you shut it down in VMware.

Otherwise you will end up with a converted VM that starts up in Hyper-V with a blue screen of death (BSOD) and a 0x0000007B “Inaccessible Boot Device” error. This is because the converted VM will have no Primary IDE Channel, and Hyper-V will presume that the converted disk is an IDE disk located on the Primary IDE Channel.

A Windows repair install can fix the 0x7B Inaccessible Boot Device error, but it is both time consuming and the result might not be good. (Believe me: I had to redo a migration of a SharePoint installation because a Windows repair install messed it up. Luckily, I then came up with the solution described below instead.)

Please note that adding a temporary IDE disk is not necessary for VMs running Windows Vista or Windows Server 2008; they seem to detect the Primary IDE Channel during the initial boot phase.

3) Add a new IDE disk to your VM (any size will do)

Make sure that you select “Adapter: IDE 0, Device: 0” under “Virtual Device Node” while creating the new disk (otherwise you might end up with yet another SCSI disk).

4) Boot up your virtual machine with both drives connected and check that it detects your new IDE drive (along with a primary IDE channel and a disk device driver). You should be able to see the new drive as "not initialized" in Disk Management.

5) Power off your virtual machine and remove the newly created IDE disk from your VM (you can delete it from disk as well). Do not power on your VMware Machine again!

6) Now convert your VMDK file to VHD format using the newest Vmdk2Vhd utility (currently version 1.0.13), which can be downloaded from http://vmtoolkit.com.

7) You can now uninstall VMware Server and install Hyper-V, plus current Windows updates, on your host server

8) Create a new virtual machine in Hyper-V. Make sure you select “Use an existing virtual hard disk” and choose the VHD file that you just created. (A scripted alternative for this step and the next is sketched after the list.)

9) Power it on, install Integration Services, and reboot when prompted

10) Assign the original IP address(es) to your new network card(s)

11) Check device manager

12) Do another reboot

13) Check that all your applications and services are running

14) Done!
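For those who prefer scripting, here is a minimal sketch of steps 8 and 9 using the Hyper-V PowerShell module available on newer hosts (Windows Server 2012 and later); the VM name, memory size, and VHD path are placeholder assumptions:

    # Create a Hyper-V VM around the converted disk (step 8, scripted)
    # VM name, memory size, and VHD path are placeholders
    New-VM -Name 'ConvertedVM' -MemoryStartupBytes 2GB `
        -VHDPath 'D:\Hyper-V\ConvertedVM\ConvertedVM.vhd'

    # Power it on (step 9); Integration Services are then installed from the VM console
    Start-VM -Name 'ConvertedVM'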


Note: if you have Windows Server 2008 VMs, it’s not necessary to add a temporary IDE disk during migration, but you might want to copy the relevant KB949219 (http://support.microsoft.com/kb/949219) update package to your VM before converting it. Otherwise it will start up with three warnings in Device Manager, for “Microsoft VMBus Video Device”, “Microsoft VMBus HID Miniport”, and “Microsoft VMBus Network Adapter”, and hence you will have no network access. I worked around this by burning the KB949219 updates to an ISO file using ISO Recorder (http://isorecorder.alexfeinman.com) and mounting the ISO file in my VM.

Pushing the Limits of Windows: Paged and Nonpaged Pool

In previous Pushing the Limits posts, I described the two most basic system resources, physical memory and virtual memory. This time I’m going to describe two fundamental kernel resources, paged pool and nonpaged pool, that are based on those and that are directly responsible for many other system resource limits, including the maximum number of processes, synchronization objects, and handles.

Paged and nonpaged pools serve as the memory resources that the operating system and device drivers use to store their data structures. The pool manager operates in kernel mode, using regions of the system’s virtual address space (described in the Pushing the Limits post on virtual memory) for the memory it sub-allocates. The kernel’s pool manager operates similarly to the C-runtime and Windows heap managers that execute within user-mode processes.  Because the minimum virtual memory allocation size is a multiple of the system page size (4KB on x86 and x64), these subsidiary memory managers carve up larger allocations into smaller ones so that memory isn’t wasted.

For example, if an application wants a 512-byte buffer to store some data, a heap manager takes one of the regions it has allocated and notes that the first 512-bytes are in use, returning a pointer to that memory and putting the remaining memory on a list it uses to track free heap regions. The heap manager satisfies subsequent allocations using memory from the free region, which begins just past the 512-byte region that is allocated.

Nonpaged Pool

The kernel and device drivers use nonpaged pool to store data that might be accessed when the system can’t handle page faults. The kernel enters such a state when it executes interrupt service routines (ISRs) and deferred procedure calls (DPCs), which are functions related to hardware interrupts. Page faults are also illegal when the kernel or a device driver acquires a spin lock, which, because they are the only type of lock that can be used within ISRs and DPCs, must be used to protect data structures that are accessed from within ISRs or DPCs and either other ISRs or DPCs or code executing on kernel threads. Failure by a driver to honor these rules results in the most common crash code, IRQL_NOT_LESS_OR_EQUAL.

Nonpaged pool is therefore always kept present in physical memory and nonpaged pool virtual memory is assigned physical memory. Common system data structures stored in nonpaged pool include the kernel and objects that represent processes and threads, synchronization objects like mutexes, semaphores and events, references to files, which are represented as file objects, and I/O request packets (IRPs), which represent I/O operations.

Paged Pool

Paged pool, on the other hand, gets its name from the fact that Windows can write the data it stores to the paging file, allowing the physical memory it occupies to be repurposed. Just as for user-mode virtual memory, when a driver or the system references paged pool memory that’s in the paging file, an operation called a page fault occurs, and the memory manager reads the data back into physical memory. The largest consumer of paged pool, at least on Windows Vista and later, is typically the Registry, since references to registry keys and other registry data structures are stored in paged pool. The data structures that represent memory mapped files, called sections internally, are also stored in paged pool.
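You can watch the size of both pools from user mode via the standard Memory performance counters; for instance, a quick PowerShell sample (counter paths as exposed by the Memory performance object):

    # Sample the current paged and nonpaged pool sizes, in bytes
    Get-Counter -Counter '\Memory\Pool Paged Bytes', '\Memory\Pool Nonpaged Bytes' |
        Select-Object -ExpandProperty CounterSamples |
        Format-Table Path, CookedValue -AutoSize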

Device drivers use the ExAllocatePoolWithTag API to allocate nonpaged and paged pool, specifying the type of pool desired as one of the parameters. Another parameter is a 4-byte Tag, which drivers are supposed to use to uniquely identify the memory they allocate, and that can be a useful key for tracking down drivers that leak pool, as I’ll show later.

Continue reading