ZFS vs Windows Storage Spaces

Not to be confused with IBM’s zFS: that one is a UNIX file system for z/OS® whose files and directories are accessed through APIs and which can be mounted into the z/OS UNIX hierarchy alongside other local or remote file system types such as HFS, TFS and NFS. It has nothing to do with the ZFS discussed here, which is OpenZFS. ZFS on Linux and ZFS on FreeNAS (FreeBSD) both support feature flags; the problem has been that various new flags have rolled out from different vendors at different times, it has taken a long time to reach parity between ZoL, Illumos and FreeBSD, and it’s very easy to create a pool on one system that can’t be imported on a system that doesn’t yet support the flags used by that pool.
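
If you ever want to check what a pool actually uses before trying to move it between platforms, something like the rough sketch below works; it simply shells out to zpool and lists the feature@ properties that aren’t disabled. The pool name “tank” is only an example.

```python
# Rough sketch: list the feature flags a pool has enabled or active,
# which is what decides whether another OpenZFS platform can import it.
# Assumes standard `zpool get -H` output; "tank" is just an example pool name.
import subprocess

def pool_feature_flags(pool="tank"):
    out = subprocess.run(
        ["zpool", "get", "-H", "all", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    flags = {}
    for line in out.splitlines():
        parts = line.split("\t")          # -H gives: name, property, value, source
        if len(parts) != 4:
            continue
        _, prop, value, _ = parts
        if prop.startswith("feature@") and value != "disabled":
            flags[prop] = value           # "enabled" or "active"
    return flags

if __name__ == "__main__":
    for flag, state in sorted(pool_feature_flags().items()):
        print(f"{flag}: {state}")
```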

It’s time for the big showdown! In this post I continue my benchmark frenzy: after messing around with ZFS, I put those SSDs to the test with FreeNAS. The most interesting part will be, of course, to see how they stand against Storage Spaces. On FreeNAS (FreeBSD) there’s no ATTO Disk Benchmark, obviously, but there’s IOzone. It’s not quite the same, because it generates various request lengths for each chunk size, so I had no better idea than to just average those values. If you think this needs to be improved, just let me know in the comments section. For your reference, I’ve also made the raw numbers available here.
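
In case you’re curious what that averaging looks like, here’s a minimal sketch. It assumes the IOzone results were exported to a CSV, and the column names below (file_size_kb, throughput_kbps) are just placeholders for whatever your export actually uses.

```python
# Minimal sketch of the averaging step: for each file (chunk) size, average the
# throughput measured across the different request lengths IOzone generated.
# The CSV layout and column names are assumptions, not IOzone's native format.
import csv
from collections import defaultdict

def average_per_file_size(path="iozone_results.csv"):
    sums, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            size = int(row["file_size_kb"])         # chunk size plotted on the charts
            sums[size] += float(row["throughput_kbps"])
            counts[size] += 1                       # one sample per request length
    return {size: sums[size] / counts[size] for size in sums}

if __name__ == "__main__":
    for size, avg in sorted(average_per_file_size().items()):
        print(f"{size} KB: {avg / 1024:.1f} MB/s (averaged over request lengths)")
```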

Based on my (and others’) previous benchmarks, we already know all too well that while RAID10 performance is pretty decent with Storage Spaces, its parity schemes just suck ass. Let’s see ZFS’ take on the topic.


Nothing too fancy with RAID10 reads. ZFS is maybe a bit more balanced, but that’s it.

Same goes for writes, totally predictable performance. Now it’s time to add a twist to it.

Yeah, with single parity it actually looks like something usable compared to Storage Spaces, but trust me, it’ll only get better.

Quite ridiculous, isn’t it? That’s how we do things downtown. There’s simply no excuse for Microsoft here: Storage Spaces is absolutely worthless in comparison, simply humiliated by RAID-Z. Let’s see what the deal is with double parity.

You may remember that, for whatever reason, SS RAID6 read performance was considerably better than that of RAID5, and in this case it shows. Overall, it’s about the same throughput as with RAID-Z2. Of course, reads are the least of our problems, so check out writes again.



Again, Storage Spaces is annihilated. Would anyone actually want to use it for… anything?

I just wanna show you how incredibly balanced ZFS RAID performance is. This chart also includes RAID-Z3, which is basically triple parity and has no Storage Spaces equivalent, so it’s missing from the previous comparisons.


See? I rest my case: this is the most consistent performance ever.


Same goes for writes. In fact, the results are almost too good to believe. Some people on #freenas even suggested that I’m limited by one or more of the buses, but that’s hard to believe: I connect to the 24 SSDs via 24 SAS ports, those are split between two SAS3 HBA cards, and each of those two cards sits in its own PCI-E 3.0 x8 slot. I simply don’t see a bottleneck here.
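
Just to put some numbers behind that, here’s a quick back-of-envelope sketch using nominal figures: SAS3 runs at 12 Gb/s per lane (roughly 1200 MB/s after encoding) and PCIe 3.0 carries about 985 MB/s per lane. The per-SSD throughput in it is only a placeholder, so plug in your own value.

```python
# Back-of-envelope check of the bus limits described above, using nominal figures.
SAS3_LANE_MBPS = 1200        # 12 Gb/s per SAS3 lane, ~1200 MB/s after 8b/10b encoding
PCIE3_LANE_MBPS = 985        # PCIe 3.0 payload rate per lane

sas_ports_per_hba = 12       # 24 SSDs split between two HBAs
pcie_lanes_per_hba = 8       # each HBA sits in its own x8 slot

sas_limit_per_hba = sas_ports_per_hba * SAS3_LANE_MBPS      # ~14,400 MB/s
pcie_limit_per_hba = pcie_lanes_per_hba * PCIE3_LANE_MBPS   # ~7,900 MB/s

ssd_mbps = 500               # hypothetical per-SSD throughput, just for illustration
demand_per_hba = sas_ports_per_hba * ssd_mbps                # ~6,000 MB/s

print(f"SAS limit per HBA : {sas_limit_per_hba:,.0f} MB/s")
print(f"PCIe limit per HBA: {pcie_limit_per_hba:,.0f} MB/s")
print(f"12 SSDs demand    : {demand_per_hba:,.0f} MB/s")
```

With those assumptions, even the tighter PCIe x8 limit sits comfortably above what a dozen SATA-class SSDs can push per HBA.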

After seeing these numbers I can only repeat myself: there’s absolutely no excuse for Microsoft. I noted in my previous post that the traditional parity levels are basically broken. I do believe that Microsoft should seriously consider incorporating the architecture of ZFS (if not ZFS itself) into a future release of Windows Server Storage Spaces. I know, at this point this sounds like blasphemy, but seriously, why not? Of course, you can always reinvent the wheel, but it’d make much more sense to join forces with the existing OpenZFS folks and help each other along the way. I’m sorry to say, but until Microsoft does something along these lines, Storage Spaces will not be a worthy alternative.


Dear Reader! If you have a minute to spare, please cast your vote on Uservoice about this idea:


Thanks a lot, fingers crossed!