Best products from r/zfs
We found 30 comments on r/zfs discussing the most recommended products. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We identified 51 products and ranked them by the number of positive reactions they received. Here are the top 20.
1. Intel Optane SSD 800P Series (58GB, M.2 80mm PCIe 3.0, 3D XPoint) - SSDPEK1W060GAXT
- 3D XPoint memory media
- Sustained Sequential Read/Write: up to 1400 / 600 MB/s
- 4KB Random Read/Write, Queue Depth 4: up to 340K / 140K IOPS
- PCIe 3.0 X2, NVMe interface
- 5-year warranty
2. Syba 8 Port SATA III Non-RAID PCI-e x4 Controller Card Supports FreeNAS and ZFS RAID - Includes Mini SAS to SATA Break Out Cables (SI-PEX40137)
- An ASM1806 PCIe bridge combined with the dual Marvell 9215 chipsets eliminates the need for a port multiplier, allowing maximum bandwidth per port
- Compliant with PCI-Express Specification v2.0 and backwards compatible with PCI-E 1.x
- Supports communication speeds of 6.0 Gbps, 3.0 Gbps, and 1.5 Gbps
- Supports Native Command Queuing (NCQ), hot plug, and hot swap
- Supports AHCI 1.0 programming interface registers for the SATA controller
3. Rosewill 4U Server Chassis/Server Case/Rackmount Case, Metal Rack Mount Computer Case Support with 15 Bays & 7 Fans Pre-Installed (RSV-L4500)
Superb Scalability: Supports up to 15 internal 3.5" HDDs and seven expansion slots, so users can expand their server system easily. Unmatched Cooling: 2 x 80mm rear fans, 3 x 120mm front fans and 3 x 120mm middle fans; in total, 8 cooling fans deliver exceptional thermal performance you can rely on. Front D...
4. Inateck SSD Mounting Bracket 2.5 to 3.5 with SATA Cable and Power Splitter Cable, ST1002S
- Supports 2.5" drives (SSD and HDD) and fits all popular PC cases
- Holds two 2.5" drives, freeing valuable space for additional PC components
- Enables fast, interference-free data and power transmission
- Top speed up to 6 Gbps; backwards compatible with SATA I & II
- Accessories: 1 x mounting kit, 1 x screwdriver, all screws needed, 1 x 4-pin to dual 15-pin SATA power cable (16 cm), 1 x 15-pin to dual 15-pin SATA power cable (16 cm), 2 x SATA data cables (48 cm)
5. SilverStone Technology Premium Mini-Itx/DTX Small Form Factor NAS Computer Case, Black DS380B-USA Newest Version (SST-DS380B-USA)
- Supports 12 drives in total: 8 hot-swappable 3.5" or 2.5" SAS/SATA and 4 fixed 2.5" drives
- Unbelievable storage space and versatility for a small form factor
- Premium brushed aluminum front door
- Supports graphics cards up to 11" with the supporter design from the TJ08-E
- Lockable power button design and adjustable LED from the GD07
- Includes three 120mm fans with filtered intake vents
6. ASUS Hyper M.2 x16 Expansion Card, Supports NVMe M.2 Drives at Speeds up to 128Gbps
- Intel VROC ready with Intel RSTe for X299 series motherboard models with Skylake-X processors
- Supports four (4) additional NVMe M.2 drives using Intel VROC (Virtual RAID on CPU) for transfer speeds up to 128Gbps
- PCI Express 3.0 x16 interface, compatible with PCI Express x8 and x16 slots
- Integrated blower-style fan for optimized cooling of M.2 drives
- To enable VROC, a hardware key (standard or premium) is required to unlock features
7. Cable Matters Internal Mini SAS to Mini SAS Cable (Mini-SAS to Mini-SAS Cable) 3.3 Feet, 1m
MULTILANE MINI SAS DATA CABLE directly connects a RAID or PCI Express controller to the SAS backplane of a hard drive bay in a server or workstation; Internal, multilane cable with Mini SAS 8087 connectors is used for mass storage interconnection between a SAS controller and a SAS/SATA drive enclosu...
8. 3WARE Cable Multi-lane Internal Cable (SFF-8087)
Length is 0.5m. Connects the controller's SFF-8087 multi-lane connector(s) to the drives' or backplane's discrete SATA connector(s). It combines the RAID controller's multiple SAS/SATA ports into single locked connections. Model: CBL-SFF8087OCF-05M. Type: Internal cable. Description: Connects the con...
9. Mediasonic ProBox HF2-SU3S2 4 Bay 3.5” SATA HDD Enclosure – USB 3.0 & eSATA Support SATA 3 6.0Gbps HDD transfer speed
- Supports all brands of 3.5" SATA I / II / III hard disk drives up to 14TB per drive, and up to 4 x 14TB in total
- Supports SATA 3 6.0Gbps hard drive transfer rates
- Transfer rates up to 5.0Gbps via USB 3.0 and up to 6.0Gbps via eSATA
- Supports 2.5" SATA SSD / HDD (bracket adapter required; not included in the package, sold separately)
- Built-in thermal sensor with auto and manual modes, and one-button interface selection to switch between USB 3.0 and eSATA
10. AeroPress Coffee and Espresso Maker - Quickly Makes Delicious Coffee Without Bitterness - 1 to 3 Cups Per Pressing
- Popular with coffee enthusiasts worldwide, the patented AeroPress Original is a new kind of coffee press that uses a rapid, total immersion brewing process to make smooth, delicious, full flavored coffee without bitterness and with low acidity.
- Good-bye French Press! The rapid brewing AeroPress Original avoids the bitterness and high acidity created by the long steep time required by the French press. Plus, the AeroPress paper Micro-filter eliminates grit and means clean up takes seconds.
- Versatile: Easily makes 1 to 3 cups of American coffee per pressing in about a minute. Unlike a French press, it can also make cold brew (in just two minutes!) or espresso style coffee for use in lattes, cappuccinos and other espresso based drinks.
- Perfect for home kitchen use, the AeroPress Original is lightweight, compact, portable and durable, making it also ideal for traveling, camping, backpacking, boating and more!
- Includes the AeroPress press, funnel, scoop, stirrer, 350 paper micro-filters and a filter holder. Phthalate free and BPA free. Mug not included. Assembled measurements: 9 1/2" h X 4" w X 4" d
11. Seagate Archive HDD 8TB SATA 6GBps 128MB Cache SATA Hard Drive (ST8000AS0002)
For archive use only. Reliable, low-power data retrieval based on SMR technology. Enjoy peace of mind with a drive engineered for 24x7 workloads of 180TB/year. Keep your costs down with up to 1.33TB-per-disk hard drive technology. Store your data faster with a SATA 6Gb/s interface that optimizes burst perform...
12. IcyBox IB-3640SU3 External 4 Bay JBOD System for 3.5" SATA HDD USB 3.0
ICY BOX-20640
13. Qlogic Qle4060C-Ck Qlogic 1Gb-Pcie-Iscsi Single Port Hba
Type: Fibre Channel Host Bus Adapter. Host Interface: PCI Express.
14. StarTech.com 2 Port SATA 6 Gbps PCI Express eSATA Controller Card - Storage Controller - 2 Channel - eSATA 6Gb/s - 6 Gbit/s - PCIe - PEXESAT32
Adds two eSATA 3.0 (6Gbps) ports for high-speed access to large external storage solutions. PCI Express eSATA card; SATA 6 Gbps controller; dual-port eSATA. Includes full and low profile brackets.
15. IO Crest 16 Port SATA III PCIe 2.0 x2 Controller Card Green, SI-PEX40097
Chipset: Marvell 9215. Compliant with PCI-Express Specification v2.0 and backwards compatible with PCI-E 1.x. Supports AHCI 1.0 programming interface registers for the SATA controller. Supports Message Signaled Interrupts (MSI). Supports port multiplier FIS-based switching or command-based switching.
16. AFUNTA 3FT eSATA to SATA male to male M/M Shielded Extender Extension HDD Cable 6Gbps
17. CableCreation Mini SAS 36Pin (SFF-8088) Male to 4 SATA 7Pin Female Cable, Mini SAS Host/Controller to 4 SATA Target/Backplane, 1 Meter
External Mini SAS 26-pin (SFF-8088) to 4x 7-pin SATA female cable. It has an external 26-pin SFF-8088 male Mini-SAS plug (with release ring) on one end and 4x 7-pin SATA on the other. Serial Attached SCSI (SAS) is a high-speed data storage interface designed for high-throughput and fast data access. Inte...
18. LSI Logic SAS9200-8E 8PORT Ext 6GB Sata+SAS Pcie 2.0
- LSI Logic 9200-8e 8-port SAS controller, PCI Express x8, plug-in card, 2 SAS port(s), RoHS compliant
19. (2 Pack) Internal 36 Pin Mini SAS HD SFF-8087 Male to SFF-8087 Mini SAS Cable 2.0 Feet / 0.6 Meter
- Connectors: SFF-8087 male (internal Mini SAS, 36-pin) to SFF-8087 male (internal Mini SAS, 36-pin)
- Cable length: 2 feet / 0.6 meter
- Conductors / drain wire: solid tinned copper, 30 AWG
- Shielding: 0.001" aluminized polyester, foil in, 20% minimum overlap, white; clear polyester, heat sealed
- Data transfer rate: 6Gb/s
20. Lycom DT-120 M.2 PCIe to PCIe 3.0 x4 Adapter (Support M.2 PCIe 2280, 2260, 2242)
- Lycom DT-120 M.2 NGFF PCIe based SSD works in main board PCIe x4 bus slot
- PCI Express 3.0 x4 Lane Host adapter
- Supports PCIe Gen3 and PCIe Gen2 M.2 NGFF 80mm, 60mm, 42mm SSD
- Supports PCIe 1.0, PCIe 2.0 and PCIe 3.0 motherboards
- Note: this adapter is only for 'M' key M.2 PCIe SSD such as Samsung XP941 SSD. Not compatible with a 'B' key M.2 PCIe x2 SSD or 'B' key M.2 SATA SSD.
Linking OP's problem here...
Chances are 9/10 that the CPU is not "busy", but instead bumping up against a mutex lock. Welcome to the world of high-performance ZFS, where pushing forward the state-of-the-art is often a game of mutex whac-a-mole!
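As a toy illustration of what "busy but not busy" looks like (this has nothing to do with the kernel's actual osq_lock implementation; it's just the general shape of the problem, sketched in Python with hypothetical names), here's a bunch of writer threads that all serialize on one shared lock, so adding threads adds waiting rather than throughput:

```python
# Toy model of lock contention: 8 "writer" threads all serialize on one lock.
# The names (writer, counter) are made up for illustration only.
import threading

lock = threading.Lock()
counter = 0

def writer(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # every writer contends here, like z_wr_iss on its mutex
            counter += 1    # the "work" is trivial; waiting for the lock dominates

threads = [threading.Thread(target=writer, args=(10000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # correct result, but produced one-thread-at-a-time
```

A profiler watching this would show most time inside the locking machinery, not inside the increment, which is exactly the signature in the perf output below.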
Here's the relevant CPU note from the post:
> did a perf top and it shows most of the kernel time spent in _raw_spin_unlock_irqrestore in z_wr_int_4 and osq_lock in z_wr_iss.
Seeing "lock" in the name of any kernel process is often a helpful clue. So let's do some research: what is "z_wr_iss"? What is "osq_lock"?
I decided to pull down the OpenZFS source code and learn by searching/reading. Lots more reading than I can outline here.
txgsync: ~/devel$ git clone https://github.com/openzfs/openzfs.git
txgsync: ~/devel$ cd openzfs/
txgsync: ~/devel/openzfs$ grep -ri z_wr_iss
txgsync: ~/devel/openzfs$ grep -ri osq_lock
Well, that was a bust. It's not in the upstream OpenZFS code. What about the zfsonlinux code?
txgsync: ~/devel$ git clone https://github.com/zfsonlinux/zfs.git
txgsync: ~/devel$ cd zfs
txgsync: ~/devel/zfs$ grep -ri z_wr_iss
txgsync: ~/devel/zfs$ grep -ri osq_lock
Still no joy. OK, time for the big search: is it in the Linux kernel source code?
txgsync: ~/devel$ cd linux-4.4-rc8/
txgsync: ~/devel/linux-4.4-rc8$ grep -ri osq_lock
Time for a cup of coffee; even on a pair of fast, read-optimized SSDs, digging through millions of lines of code with "grep" takes several minutes.
include/linux/osq_lock.h:#ifndef LINUX_OSQ_LOCK_H
include/linux/osq_lock.h:#define LINUX_OSQ_LOCK_H
include/linux/osq_lock.h:#define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) }
include/linux/osq_lock.h:static inline void osq_lock_init(struct optimistic_spin_queue *lock)
include/linux/osq_lock.h:extern bool osq_lock(struct optimistic_spin_queue *lock);
include/linux/rwsem.h:#include <linux/osq_lock.h>
include/linux/rwsem.h:#define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL
include/linux/mutex.h:#include <linux/osq_lock.h>
kernel/locking/Makefile:obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_lock.o
kernel/locking/rwsem-xadd.c:#include <linux/osq_lock.h>
kernel/locking/rwsem-xadd.c: osq_lock_init(&sem->osq);
kernel/locking/rwsem-xadd.c: if (!osq_lock(&sem->osq))
kernel/locking/mutex.c:#include <linux/osq_lock.h>
kernel/locking/mutex.c: osq_lock_init(&lock->osq);
kernel/locking/mutex.c: if (!osq_lock(&lock->osq))
kernel/locking/osq_lock.c:#include <linux/osq_lock.h>
kernel/locking/osq_lock.c:bool osq_lock(struct optimistic_spin_queue *lock)
For those who don't read C well -- and I number myself among that distinguished group! -- here's a super-quick primer: if you see a file with ".h" at the end of the name, that's a "header" file. Basically, it declares the functions, types, and variables that are used elsewhere in the code. It's really useful to look at headers, because they often carry helpful comments telling you what each piece is for. If you see a file with ".c" at the end, that's the code that does the work rather than just declaring things.
It's z_wr_iss that's driving the mutex lock. There's a good chance I can ignore the locking code itself; it's probably fine (at least I hope it is, because a fix is easier to push through ZFS on Linux than through core kernel I/O locking semantics). The real question is why we're competing over the lock in the first place. Back to grep...
txgsync: ~/devel/linux-4.4-rc8$ grep -ri z_wr_iss
MOAR COFFEE! This takes forever. Next hobby project: grok up my source code trees in ~/devel; grep takes way too long.
...
...
And the search came up empty. Hmm. Maybe _iss is a structure that's created only when it's running, and doesn't actually exist in the code? I probably should understand what I'm pecking at a little better. Let's go back to the ZFS On Linux code:
mbarnson@txgsync: ~/devel/zfs$ grep -r z_wr
module/zfs/zio.c: "z_null", "z_rd", "z_wr", "z_fr", "z_cl", "z_ioctl"
Another clue! We've figured out the Linux kernel name of the mutex we're stuck on, and that z_wr is defined in "zio.c". Now this code looks pretty familiar to me. Let's dive into the ZFS On Linux code and see why z_wr might be hung up on a mutex lock of type "_iss".
txgsync: ~/devel/zfs$ cd module/zfs/
txgsync: ~/devel/zfs/module/zfs$ vi zio.c
z_wr is a type of IO descriptor:
const char *zio_type_name[ZIO_TYPES] = {
        "z_null", "z_rd", "z_wr", "z_fr", "z_cl", "z_ioctl"
};
What about that z_wr_iss thing? And competition with z_wr_int_4? I've gotta leave that unanswered for now, because it's Saturday and I have a lawn to mow.
It seems there are a few obvious -- if tentative -- conclusions. It's just a hypothesis, but I think it may have some legs, and it needs to be ruled out before other causes can be ruled in.
I was willing to dive into this a bit because I'm in the midst of some similar tests myself, and am also puzzled why the IO performance of Solaris zones so far outstrips ZFSoL under Xen; even after reading Brendan Gregg's explanation of Zones vs. KVM vs. Xen I obviously don't quite "get it" yet. I probably need to spend more time with my hands in the guts of things to know what I'm talking about.
TL;DR: You're probably tripping over a Linux kernel mutex lock that is waiting on a Xen ring buffer polling cycle; this might not have much to do with ZFS per se. Debugging Xen I/O scheduling is hard. Please file a bug.
ADDENDUM: The Oracle Cloud storage is mostly on the ZFS Storage Appliances. Why not buy a big IaaS instance from Oracle instead and know that it's ZFS under the hood at the base of the stack? The storage back-end systems have 1.5TB RAM, abundant L2ARC, huge & fast SSD SLOG, and lots of 10K drives as the backing store. We've carefully engineered our storage back-ends for huge IOPS. We're doubling-down on that approach with Solaris Zones and Docker in the Cloud with Oracle OpenStack for Solaris and Linux this year, and actively disrupting ourselves to make your life better. I administer the architecture & performance of this storage for a living, so if you're not happy with performance in the Oracle Cloud, your problem is right in my wheelhouse.
Disclaimer: I'm an Oracle employee. My opinions do not necessarily reflect those of Oracle or its affiliates.
Good old, nothing special Seagate [ST8000AS0002](https://www.amazon.com/gp/product/B00XS423SC/). I've only had the pool online for a couple months at this point, so I can't comment on reliability... but so far I'm happy.
As long as you know what you're getting into with SMR disks, I think you can live w/ them fairly well. I am glad I have a normal pool too though. All my stuff lands there first and then moves to the SMR pool.
Some things worth mentioning... the disks are 5900 rpm, so they're not going to be great at random io. They do have ~25G of PMR area on each disk, so if your work load isn't entirely ideal for SMR disks... it can still work. A copy on write filesystem seems particularly well paired with SMR disks since they don't need to re-shingle to modify a file. They do streaming, linear writes very well to the SMR portion of the disk, I think.
I wouldn't want them to be my only disks in my NAS, but they make a good write once, read many pool.
The Tyan S7012 is a good build. I make a few ZFS file servers out of them every year, but a few notes:
-They originally shipped with support for Intel's 5500 series only. To get 5600 support, you will have to upgrade the firmware, so I recommend you buy a pair of L5520 CPUs when you get the board; they are super cheap quad-core CPUs, and a pair sells for around $5 now.
-The southbridge gets hot. Some boards come with high-profile heat sinks, but most don't. If your build is not in a high-airflow case, consider placing a 40mm fan on the heat sink; this will help with system stability.
-It comes with only 5 PCIe x8 open-ended slots (some are only x4). That's nice, since you can place larger cards into the slots (all PCIe slots should be open-ended), but be careful with slot one: the rear components may stop you from placing a large card in it.
>passmark of 26,104
No wonder you need water cooling; a 150W CPU needs that. Overall a nice build. Far more than I would spend, but I'm a bit on the cheap side.
I purchased https://www.amazon.com/Asus-Hyper-M-2-x16-Card/dp/B0753JTJTG for the heck of it. If it does not suit my needs, I figured I can return or resell it and get most of my money back. My main issue is going to be installing it: both my PCIe 3.0 x16 slots are full of video cards, and my only other option is the x4 slot that my NVMe card is currently sitting in. I am guessing I may end up upgrading my board soon, or I will most likely pull out my sound card and install a PCI one instead. I have a few LGA 2011 boards, but I just spent a week building my current setup and I don't want to rip it apart. -_-
I did a bit of searching on your behalf and obviously I haven't tested it, (so please don't hold me responsible) but this looks like 99% the same thing as the Probox:
https://www.amazon.co.uk/RaidSonic-ICY-BOX-IB-3640SU3-drive/dp/B009DH5Q2S/ref=sr_1_35?ie=UTF8&qid=1504622540&sr=8-35&keywords=4+bay+esata
RaidSonic ICY BOX IB-3640SU3 - hard drive array
The Icy Box external 4-bay JBOD enclosure takes 4x 3.5" SATA I/II/III HDDs, with easy assembly thanks to its tray-less design and no limit on HDD capacity. Supports Windows XP/Vista/Win7 and Mac OS X, Plug & Play, and hot swap. JBOD (Just a Bunch of Discs), USB 3.0, eSATA.
The reviews aren't too bad either from what I saw, so please let us know if you get one and it works well for you. :)
So after a few hours of benchmarking, here are a few questions:
cat /sys/block/sda/queue/hw_sector_size
512
zfs set atime=off rpool
Thank you
Best thing to do is to buy a new case. Either this: https://www.amazon.com/SilverStone-Technology-Mini-Itx-Computer-DS380B-USA/dp/B07PCH47Z2/ref=sr_1_15?keywords=silverstone+hotswap&qid=1566943919&s=gateway&sr=8-15 (quite a lot of folks I know on mini-ITX are using something like this: 8 hot-swap 3.5" bays and 4 x 2.5", https://www.silverstonetek.com/product.php?pid=452), or, if you want to use ALL your drives, a cheaper alternative: https://www.amazon.com/dp/B0091IZ1ZG/ref=twister_B079C7QGNY?_encoding=UTF8&th=1 . You can fit 15 x 3.5" in that, or get some 2x 2.5"-to-1x 3.5" brackets to shove some SSDs in there too: https://www.amazon.com/Inateck-Internal-Mounting-Included-ST1002S/dp/B01FD8YJB4/ref=sr_1_11?keywords=2.5+x+3.5&qid=1566944571&s=electronics&sr=1-11 (there are various companies; I looked quickly on Amazon). That way you can have 12 drives rather than just 6. The cheap SATA cards will fix you up, or shove this in there: https://www.amazon.com/Crest-Non-RAID-Controller-Supports-FreeNAS/dp/B07NFRXQHC/ref=sr_1_1?keywords=I%2FO+Crest+8+Port+SATA+III+Non-RAID+PCI-e+x4+Controller+Card+Supports+FreeNAS+and+ZFS+RAID&qid=1566944762&s=electronics&sr=1-1 . Hope this helps :)
Can you link me to a good example? Preferably one suited for a homelab, ie not ridicu-enterprise-priced to the max? This is something I'd like to play with.
edit: is something like this a good example? How is the initial configuration done - BIOS-style interface accessed at POST, or is a proprietary application needed in the OS itself to configure it, or...?
Current: (6-1) x 4 TB = 20 TB
New:
(3-1) x 6 TB = 12 TB
(3-1) x 4 TB = 8 TB
20 TB total
You don't gain any space by doing this, though you do prepare for the future.
Are you able to add more drives to your system, perhaps externally? I've personally used these Mediasonic 4-bay enclosures along with an eSATA controller (though the enclosures also support USB3). Get some black electrical tape though, because the blue lights on the enclosure are brighter than the sun. The only downside with port-splitter enclosures is that if one drive fails and knocks out the SATA bus, the other 3 drives will drop offline too. The infamous 3 TB Seagates did that, but I had other drives (both 3 TB WD and 2 TB Seagates) fail without interfering with the other drives. Nothing was permanently damaged; just had to remove the failed drive before the other 3 started working again. Also, the enclosure is not hot-swap; you have to power down to replace drives. But hey, it's $99 for 4 drive bays.
6 TB Red drives are $200 right now ($33/TB); 8 TB are $250 ($31/TB), and 10 TB are $279 ($28/TB).
Instead of spending $600 (three 6 TB drives) and getting nothing, spend $692 ($558 for two 10 TB drives, $100 for enclosure, $30 for controller, $4 for black electrical tape) and get +10 TB by adding a pair of 10 TB drives in a mirror in an enclosure, and have another 2 bays free for future expansion.
(6-1) x 4 TB = 20 TB
(2-1) x 10 TB = 10 TB
30 TB total, $692 for +10 TB
Later buy another two 10 TB drives and put them in the two empty slots:
(6-1) x 4 TB = 20 TB
(2-1) x 10 TB = 10 TB
(2-1) x 10 TB = 10 TB
40 TB total, $558 for +10 TB
Then in the future you only have to upgrade two drives at a time, and you can replace your smallest drives with the now-replaced drives.
You can repeat this with a second enclosure, of course. :)
Don't forget that some of your drives will fail outside of warranty, which can speed your replacement plans. If a 4 TB drive fails, go ahead and replace it with a 10 TB drive. You won't see any immediate effect, but you'll turn that 20 TB RAIDz1 into 50 TB that much quicker.
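The arithmetic in the upgrade plan above is easy to script. Here's a small Python sketch (function names are mine; the drive prices are the ones quoted in this comment and will have changed since) of raidz usable capacity and cost per raw TB:

```python
# Usable capacity of a raidz vdev: (drives - parity) * drive_size.
# A 2-drive "(2-1)" group behaves like a mirror: usable = one drive.
def raidz_usable_tb(drives, size_tb, parity=1):
    return (drives - parity) * size_tb

# The plan from the comment: keep the 6x4TB raidz1, add two 10TB pairs.
pool = [raidz_usable_tb(6, 4),    # existing vdev: 20 TB
        raidz_usable_tb(2, 10),   # first new pair: 10 TB
        raidz_usable_tb(2, 10)]   # second pair added later: 10 TB
print(sum(pool))                  # 40 TB total

# Cost per raw TB at the quoted prices.
for size_tb, price in [(6, 200), (8, 250), (10, 279)]:
    print(size_tb, "TB:", round(price / size_tb), "$/TB")
```

The $/TB loop reproduces the roughly $33, $31, and $28 per TB figures above, which is why the bigger drives win on price as well as on expansion headroom.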
Oh, and make sure you've set your recordsize to save some space! For datasets where you're mainly storing large video files, set your recordsize to 1 MB: "zfs set recordsize=1M poolname/datasetname". This only takes effect on new writes, so you'd have to re-write your existing files to see any difference. You can rewrite files with "cp -a filename tmpfile; mv tmpfile filename" for each file, or a much easier way is to just create a new dataset with the proper recordsize, move all the files over, then delete the old dataset and rename the new one.
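The copy-then-rename trick above can be scripted too. A hedged sketch (the function name is mine, and it's demonstrated here on a throwaway temp directory; on a real ZFS dataset the same copy-then-replace is what rewrites a file's blocks at the new recordsize):

```python
# Rewrite a file in place: copy it to a temp name, then atomically rename
# the copy over the original. On ZFS the fresh copy is written with the
# dataset's current recordsize; the content is unchanged.
import os, shutil, tempfile

def rewrite_in_place(path):
    tmp = path + ".rewrite.tmp"
    shutil.copy2(path, tmp)   # full copy, preserving metadata -> new blocks
    os.replace(tmp, path)     # atomic rename over the original

# Demo on an ordinary temp directory (stand-in for a real video dataset).
d = tempfile.mkdtemp()
f = os.path.join(d, "movie.mkv")
with open(f, "w") as fh:
    fh.write("movie data")
rewrite_in_place(f)
print(open(f).read())
```

The rename at the end is the important part: readers always see either the old file or the complete new copy, never a half-written one.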
See this spreadsheet. With 6 disks in RAIDz1 and the default 128K record size (16 sectors on the chart) you're losing 20% to parity. With 1M record size (256 sectors on the chart) you're losing only 17% to parity. 3% for free!
https://www.reddit.com/r/zfs/comments/9pawl7/zfs_space_efficiency_and_performance_comparison/
https://www.reddit.com/r/zfs/comments/b931o0/zfs_recordsize_faq/
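The spreadsheet's parity numbers can be reproduced with a short sketch. Assumptions (mine, matching how that chart is usually computed): 4K sectors, raidz adds ceil(data / (disks - parity)) parity sectors per parity level per record, and the total allocation is padded to a multiple of (parity + 1) sectors:

```python
import math

def raidz_overhead(disks, parity, data_sectors):
    """Fraction of allocated sectors lost to parity + padding for one record."""
    parity_sectors = parity * math.ceil(data_sectors / (disks - parity))
    total = data_sectors + parity_sectors
    total += -total % (parity + 1)   # pad allocation to a multiple of parity+1
    return 1 - data_sectors / total

# 6-disk raidz1, 128K records on 4K sectors = 16 data sectors -> 20% lost
print(round(raidz_overhead(6, 1, 16) * 100))
# 1M records = 256 data sectors -> ~17% lost
print(round(raidz_overhead(6, 1, 256) * 100))
```

With 16 data sectors you pay 4 parity sectors (4/20 = 20%); with 256 you pay 52 (52/308 ≈ 17%), matching the "3% for free" claim above.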
Do you know if there's a list of supported chipsets for the ROCKPro64 PCI-e?
With something like this I could connect up to 16 SATA discs and run ZFS with it.
I currently have an old server: Tyan motherboard, Intel Core 2 Duo, 8GB DDR2 RAM, PCI-X (not Express, the old PCI-X format), 10 3TB hard drives in RAID-Z2... It's massive, puts out a lot of heat for not much performance, and I'd like to replace it with a lighter config like a ROCKPro64...
Performance-wise, I only need gigabit speed, about 100MB/s top.
The card I got is "LSI LSI00244 (9201-16i) PCI-Express 2.0 x8 SATA / SAS Host Bus Adapter Card". I got it from NewEgg.
The backplane and the card both use standard (multi-lane SAS) SFF-8087 connectors. The backplane requires right-angle connectors due to the tight fit, so I got these cables: https://www.amazon.com/Internal-SFF-8087-Right-Angle-Cable/dp/B0769FWJJP/
Note that the backplane uses port multipliers to allow 12 drives to be hooked up to 8 SAS/SATA lanes using two SFF-8087 cables (each cable is four lanes). The only issue with this is that the order your system lists the drives in may not be the same as the physical order of the slots, but I have not found that to be an issue so far. It just means you should record the serial numbers of which drive is in which slot so that you can still do hot-swap replacements. It's really just a minor inconvenience.
Also, I have, and highly recommend getting, the extra two 2.5" rear drive bays. ZFS allows you to install SSDs to operate as cache drives, and those two slots are a great place for them. If using the Dell built-in connections, the rear bays hook up through the front backplane's port multiplier, but I ran a third SFF-8087 cable directly from the LSI card because there's no reason to waste the two extra ports on this four-port card.
My data drives are 10x 8TB Seagate Enterprise ST8000NM0055 in a raidz2 (double-parity) configuration. I chose them from the Backblaze quarterly Hard Drive Reliability data. I highly recommend looking there for the most up-to-date data on the drives they use and their failure rates. I have 2x 512GB Samsung 860 PRO SSDs in the rear slots, configured as L2ARC cache for ZFS. The OS (I'm using Ubuntu 18.04) is on 2x 1TB Western Digital Black in a raidz1, chosen just because they were cheap.
I am pretty happy with this setup, but there are two changes I plan to see if I can make. First, I am going to see if I can move the OS pool onto two more SSDs, likely M.2 SSDs on PCIe adapters. Second, if I do that, I'll put in two more 8TB drives and add them as hot spares so ZFS will automatically start rebuilding onto them if another drive fails.
Hi,
I had a power outage yesterday, and my UPS shut down at 75% battery remaining with no warning (I don't think I was writing to my ZFS array). I have a ZFS RAID 0 array with 3 WD Red 8TBs (yes, I understand it is 100% temporary storage and it WILL fail, two of the disks have 6k power on hours and the other has like 600, but I expect it to be good for at least a bit longer and not have a triple failure) with this device: https://www.amazon.com/Mediasonic-ProBox-HF2-SU3S2-SATA-Enclosure/dp/B003X26VV4/ and it failed. I destroyed the zpool and created it from scratch, copied all data over, and there was STILL a ton of checksum errors (not sure if I did zpool clear at any point, or if I needed to?)... does that mean all my drives are bad, or my enclosure is bad, or is it possible it was just a temporary issue? I turned that box off, rebooted my computer, turned it back on and created my zpool.
I am going to replace my UPS because I didn't trust it before and now I definitely don't, but I'm hoping my disks are okay? I don't have ANY SMART issues, offline_uncorrectable, reallocated_event_count, current_pending_sector, reallocated_sector_ct, etc are all fine and all have a raw_value of 0.
I would certainly like to hear your thoughts as always.
No, the SAS back panel will also have the single SFF-8087 port - it will look the same as on the Dell H200.
You just need a regular cable like this:
https://www.amazon.com/Cable-Matters-Internal-Mini-Mini-SAS/dp/B011W2F626/