clone - VMware ESXi snapshot cloning bug

2014-07
  • user2838376

    I've been using ESXi to deploy multiple servers. At some point I decided to clone one system to make life a little easier, following many online guides that suggest copying the VM files to a different folder within the datastore.

    However, two problems showed up:

    1) Copying within the datastore tends to take forever, despite the fact that the files total only about 250 GB and I've got around 900 GB free, while taking a snapshot took me just a minute or so (an 80 GB snapshot).

    2) For some reason the copied snapshots have a very strange size. For example, snapshot vm-000001.vmdk on the original system has a size of 4 523 008.00 KB with a ProvisionedSize of 157 286 400.00 KB, while the copied file was over 5 800 000.00 KB in size, according to the datastore browser.

    Could someone help me solve these mysteries? Why did the cloned snapshots grow bigger than the original ones, and why was the copying process so slow?

    PS: Copying was done with the VM powered off; ESXi ver. 5.5.0; the original VM disk (vm.vmdk), over 150 GB in size, copied OK, or at least it seems so (the size in both folders was equal).
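
    For context: copying a snapshot chain through the datastore browser is known to be slow and can silently convert sparse delta disks to a different format, which would explain both the crawl and the size change. The usual alternative is a vmkfstools clone from the ESXi shell, which consolidates the whole snapshot chain into one self-contained disk. A minimal sketch, with hypothetical datastore and folder names:

        # Clone from the ESXi shell instead of the datastore browser.
        # Pointing vmkfstools at the newest delta (vm-000001.vmdk)
        # consolidates the entire snapshot chain into a single disk.
        vmkfstools -i /vmfs/volumes/datastore1/vm/vm-000001.vmdk \
                      /vmfs/volumes/datastore1/vm-clone/vm.vmdk -d thin
        # "-d thin" keeps the clone thin-provisioned, so it occupies only
        # the blocks actually written rather than the full provisioned size.

    Once the clone finishes, it can be attached to a freshly registered VM; vmkfstools copies are typically much faster than datastore-browser copies of the same files.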


    Related Question

    esxi - Building a VMware box for development use
  • Gary

    I'm considering building a machine for the following purposes:

    1) Run development environments.
    2) Run DB servers to support the dev environments.

    Currently I'd be running the following environments:
    Windows 2k8 -- VS2010
    Windows 2k3 -- SQL Server
    Linux, probably 2 separate builds.
    A media server of some flavor.

    The plan is to install VMware ESXi on an SSD and have 3-5 hard drives.

    Also, as time goes on I'll likely add additional machines... I can think of a couple right offhand... So likely 5 VMs running all the time, with the potential of 2 or 3 more depending on what I'm working on...

    I've noticed that VMware Workstation / VirtualBox slows right down when running multiple virtual machines...

    Currently I'm trying to decide between the following two specs: a server-grade mobo with dual Xeons (quad-core) and about 16 GB of RAM, or a high-end desktop PC with an i7-950-ish and about 12 GB of RAM...

    Does anyone know what sort of performance I'd get out of the i7? My gut feeling is that the dual-CPU Xeon is more expensive up front, but I'd get a lot more life out of it... (move to 8-core CPUs 2 or so years from now), plus a lot more RAM potential...

    I believe I can use ECC or non-ECC RAM on the server motherboards... I don't have the option of ECC with the i7 mobo...

    Does anyone know if it's possible to plug i7s into Xeon mobos? They're both LGA1366, so I'm somewhat confused as to why they're not listed as compatible...

    Any input greatly appreciated.


  • Related Answers
  • Diffuser

    Is this for commercial use or your own little playground to work with? Typical usage for the VMs is the main deciding factor in how much CPU and RAM you need to throw at it.

    The biggest user of resources that you listed will probably be the SQL server, depending on how many queries will be run against it and what it's used for. DB servers usually have larger memory footprints, eat up lots of CPU, and take as much hard drive bandwidth as possible. The media server might also be pretty intensive, depending on whether it will be doing any transcoding work.

    Obviously, if a VM runs commercial applications that need 100% CPU resources and turnaround time matters, you'd want at least one dedicated physical core for that VM and any others like it. If it's for your own personal use and you don't need 24/7 availability under high-usage scenarios, you have a bit more freedom. You probably wouldn't see much performance degradation with the i7 even going to 10 VMs, as long as they are not all pegged at 100% CPU usage at the same time, but it becomes harder to configure the RAM allowances for each of those VMs on a limited platform.

    The amount of RAM you need is completely dependent on how you configure the VMs and what is running on them. If you have any of these servers running right now, run some diagnostics on how much memory they currently use, see where the usage peaks, and base your VM allowances on that. For total RAM on the host machine, know that after about 80% total RAM utilization ESX will begin using disk caching, which is always far slower than RAM (just like virtual memory in the OS), even if you are hosting it on an SSD. So, for example, look at the i7 platform with a theoretical 8 VMs: do you think you can safely fit those all within about 10 GB of memory, leaving the other 2 GB free so you aren't forcing ESX to use the cache?
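
    To make that 80% rule of thumb concrete, here is a back-of-the-envelope sketch for the 12 GB i7 box (the numbers are illustrative, not measurements):

        # Rough memory budget for a hypothetical 12 GB host with 8 VMs.
        HOST_MB=12288                       # 12 GB of host RAM
        BUDGET_MB=$((HOST_MB * 80 / 100))   # stay under ~80% utilization
        VMS=8
        PER_VM_MB=$((BUDGET_MB / VMS))
        echo "budget: ${BUDGET_MB} MB total, ~${PER_VM_MB} MB per VM"
        # prints: budget: 9830 MB total, ~1228 MB per VM

    Roughly 1.2 GB per VM is workable for mostly idle dev boxes, but tight for a SQL Server or a transcoding media server, which is exactly the sizing problem described above.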

    Also, if the VMs you are hosting are memory intensive, you should definitely go for more RAM over a faster CPU or faster RAM, as you will see a greater benefit. If RAM usage spikes and you run out of memory, your performance will tank across the board, no matter how fast your host CPU is or what your RAM is clocked at. Again, as you said, the Xeon has lots of headroom for RAM expansion; the i7 box has enough for 5 VMs now, but how many would you be adding in the future, and with what sort of memory allocation?

    Of course, another alternative if the Xeon platform is too expensive is to simply get one i7 box now and then get a second one later if you need it. If space isn't an issue where you are putting these ESX boxes, this may very well be the best bang-for-your-buck solution.

    As for installing the i7s in a Xeon mobo, it will only work if the Xeon board is single-socket, as far as I know. i7s are only able to run in single-socket mode, so it might work if you only put one in a dual-socket Xeon board, but you'd lose half of the RAM slots and other features, so it'd be kind of worthless.

  • paradroid

    I run up to six or seven VMs at the same time on an HP ProLiant pedestal server running ESXi, with 8 GiB of ECC RAM, and it does not slow down (although most of the VMs are idling most of the time).

    My server runs on a three-year-old quad-core Opteron. Having set up a few VMware systems for businesses, I really do not think you would need a dual-CPU system for personal use, as that would only be needed with at least a half-dozen concurrent users.

    I'd recommend you use ECC RAM, so you'd need a Xeon for that. If you really think you will need two processors as a future upgrade, you could get a dual-socket motherboard and fit only one Xeon. You can get the HP ProLiant ML350 G6 in this configuration.

    When buying hardware for ESXi, you need to check the Hardware Compatibility List, as it does not run on just any hardware (Hyper-V is a lot better for that). I found that I had to buy another RAID controller, as the one I had was not compatible with ESXi.

    Another thing: using an SSD for ESXi seems like a big waste to me. It can run perfectly fine from a USB flash drive. My ProLiant even has a USB port on the motherboard especially for this. The newer G6 and G7 generations have ESXi already embedded on SD cards. Doing this leaves your hard disks purely for your virtual machines and storage vDisks.