What is VMFS6?

VMFS6 is the latest version of the VMware File System, introduced with the release of vSphere 6.5, and it brings several enhancements over VMFS5. In this post, I'll explain what's new in VMFS6, highlight the major differences between the two versions, and cover some additional points you should know when planning an upgrade to vSphere 6.5 or managing a virtual environment with mixed vSphere versions.

The first thing to know: you can't upgrade existing VMFS5 volumes in place to VMFS6. Instead, you have to create new VMFS6 datastores and then migrate your workloads from VMFS5 to VMFS6 using Storage vMotion. Once the migration completes successfully, delete the old VMFS5 datastores to reclaim the disk space. Depending on the size of your environment and how much free space you have, this can be a long and tedious process.
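As a sketch of the first step, on an ESXi 6.5 host you can check the VMFS versions of existing datastores and create the new VMFS6 volume from the ESXi shell. The datastore name and device path below are placeholders for your own environment; the Storage vMotion itself is then performed from the vSphere Client or PowerCLI.

```shell
# List mounted filesystems and their VMFS versions (VMFS-5 vs. VMFS-6)
esxcli storage filesystem list

# Create a VMFS6 datastore on a prepared partition
# (the device identifier below is a placeholder, not a real LUN)
vmkfstools -C vmfs6 -S DS-VMFS6-01 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```

After the VMs have been moved off the old datastore with Storage vMotion, the empty VMFS5 volume can be unmounted and deleted.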

What’s New in VMFS 6 Datastore?

The following are the key enhancements introduced with VMFS6 datastores:

Support for 4K Native drives in 512e mode: As spinning-disk capacities continue to grow with new technologies, newer drives use 4 KB physical sectors instead of the traditional 512 bytes per sector. VMFS6 supports these drives when they are exposed in 512e mode, i.e. with 512-byte logical sector emulation.

SE Sparse disk is now the default: SE Sparse disks provide improved space efficiency for VDI infrastructure deployed on this virtual disk format because they can automatically reclaim stranded space from within the guest OS.

Automatic Space Reclamation: Automatic Space Reclamation allows vSphere to send UNMAP commands to a storage array to reclaim dead or stranded space on thinly provisioned VMFS volumes. In vSphere 6.0, this had to be done manually via the command-line interface. vSphere 6.5 automates the process: deleted VMFS blocks are tracked, and the dead space is reclaimed from the storage array in the background roughly every 12 hours, with minimal impact on storage I/O.
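As a rough illustration from the ESXi shell (the datastore labels below are placeholders), manual reclamation on vSphere 6.0 and the new automatic-reclamation settings on vSphere 6.5 look like this:

```shell
# vSphere 6.0 and earlier: reclaim dead space manually on a VMFS5 volume
esxcli storage vmfs unmap --volume-label=DS-VMFS5-01

# vSphere 6.5 / VMFS6: view the automatic reclamation configuration
esxcli storage vmfs reclaim config get --volume-label=DS-VMFS6-01

# Adjust the reclamation priority (low is the default; none disables it)
esxcli storage vmfs reclaim config set --volume-label=DS-VMFS6-01 --reclaim-priority=low
```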

UNMAP works both at the array level and at the guest OS level with newer versions of Windows and Linux.
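On the guest OS side, a Linux VM on a thin-provisioned disk can return freed blocks with the standard fstrim utility, provided the virtual hardware and filesystem support it (run as root inside the guest; the mount point is an example):

```shell
# Trim freed blocks on the root filesystem and report how much was discarded
fstrim -v /

# Optionally enable the periodic trim timer shipped with util-linux/systemd
systemctl enable fstrim.timer
```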

Supports 512 devices and 2,000 paths: Previous releases limited each host to 256 devices and 1,024 paths; vSphere 6.5 raises these limits to 512 devices and 2,000 paths, which helps hosts that connect to a growing number of large-capacity storage devices.

View Storage Accelerator (Content-Based Read Cache (CBRC)): The View Storage Accelerator reduces the storage load generated by peak VDI read activity by using the Content-Based Read Cache (CBRC) to store common blocks of desktop images in local host memory. This significantly improves desktop performance, especially during boot storms or anti-virus scan storms, when a large number of blocks with identical contents are read.

Comparison Between VMFS5 & VMFS6

The following table compares the features and support of VMFS5 and VMFS6.

| Features and Functionalities | VMFS5 | VMFS6 |
| --- | --- | --- |
| Access for ESXi hosts version 6.5 and later | Yes | Yes |
| Access for ESXi hosts version 6.0 and earlier | Yes | No |
| Datastores per host | 512 | 512 |
| 512n storage devices | Yes | Yes (default) |
| 512e storage devices | Yes (not supported on local 512e devices) | Yes (default) |
| 4Kn storage devices | No | Yes |
| Manual space reclamation through the esxcli command | Yes | Yes |
| Automatic space reclamation | No | Yes |
| Space reclamation from guest OS | Limited | Yes |
| GPT storage device partitioning | Yes | Yes |
| MBR storage device partitioning | Yes (for a VMFS5 datastore previously upgraded from VMFS3) | No |
| Storage devices greater than 2 TB for each VMFS extent | Yes | Yes |
| Support for virtual machines with large-capacity virtual disks (greater than 2 TB) | Yes | Yes |
| Support for small files of 1 KB | Yes | Yes |
| Default use of ATS-only locking on storage devices that support ATS | Yes | Yes |
| Block size | Standard 1 MB | Standard 1 MB |
| Default snapshot format | VMFSsparse for virtual disks smaller than 2 TB; SEsparse for virtual disks larger than 2 TB | SEsparse |
| Virtual disk emulation type | 512n | 512n |
| vMotion | Yes | Yes |
| Storage vMotion across different datastore types | Yes | Yes |
| High Availability and Fault Tolerance | Yes | Yes |
| DRS and Storage DRS | Yes | Yes |
| RDM | Yes | Yes |

I hope you enjoyed reading this post. Thanks for reading! If you found it useful, please share it on social media.