(Part 3) Best products from r/homelab

We found 144 comments on r/homelab discussing the most recommended products. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 3,145 products and ranked them by the number of positive reactions they received. Here are the products ranked 41-60. You can also go back to the previous section.
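Conceptually, the ranking works like the short sketch below (illustrative only; the product names and the scoring function are placeholders, not the actual pipeline):

```python
from collections import defaultdict

def rank_products(mentions):
    """Rank products by total positive-sentiment score across comments.

    `mentions` is an iterable of (product, sentiment_score) pairs, where the
    score is positive for favorable comments and negative otherwise.
    """
    totals = defaultdict(float)
    for product, score in mentions:
        if score > 0:  # only positive reactions count toward the ranking
            totals[product] += score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: three comments mentioning two products.
sample = [("Synology DS2415+", 0.9), ("StarTech RK1236BKF", 0.7), ("Synology DS2415+", -0.2)]
print(rank_products(sample))  # [('Synology DS2415+', 0.9), ('StarTech RK1236BKF', 0.7)]
```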

Top comments mentioning products on r/homelab:

u/evemanufacturetool · 1 point · r/homelab

So many questions.

> My home LAN right now is an Asus RT-AC66U, 2 desktops, printer, and my freenas Bound together by 2Gb 8 port switches. I also just built a little ESX server. I wanted to virtualize server OSs, and manage a small domain environment. I also wanted to keep it mostly separate from normal my LAN.

Presumably that link bonding is client side? What switch is it?

> I don't know how to go about doing this. I think this is a situation where a vlan and pfsense are used. I want to be able to control my vms from my primary desktop I also want my Vms to be able to reach the internet.

VLANs allow you to have different networks over the same physical infrastructure. For example, you could have two VLANs running down the same cable back to your router, with the router having two interfaces, each listening for its respective VLAN tag. You can then add firewall rules to each interface as if it were a physical NIC, but it's all software.

What you're after is subnets on top of VLANs. Just have your VMs on a different subnet to your main network and traffic between them will have to be routed (by your router) so you get firewall rules etc.

Having the VMs able to reach the internet is as simple as giving them a route to do so.
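To make the subnet idea concrete, here's a minimal sketch using Python's `ipaddress` module; the 192.168.1.0/24 LAN and 192.168.10.0/24 VM subnet are example values, not the actual addressing from the thread:

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")    # example main LAN (VLAN 1)
vms = ipaddress.ip_network("192.168.10.0/24")   # example VM subnet (VLAN 10)

desktop = ipaddress.ip_address("192.168.1.50")
vm_host = ipaddress.ip_address("192.168.10.20")

# The two networks don't overlap, so any desktop <-> VM traffic must pass
# through the router, where firewall rules can be applied per interface/VLAN.
print(lan.overlaps(vms))               # False
print(desktop in lan, vm_host in vms)  # True True
```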
> Will i need to add a 2nd NIC to my desktop to access two networks?

No. Unless you want direct access to your VM subnet by having an IP within their range, access from your desktop will be just as it is with any other IP. It'll get routed to the correct location by your router.
> pfsense needs dedicated hardware right? is this good enough http://amzn.com/B00OY8Q0QC[1]

Not necessarily. I've seen many on here running it virtualised but there are those who prefer to run something like that as a dedicated machine. As for hardware, unless you have 500Mbps+, you won't need anything that expensive. If you have an old machine, those can often work just fine for a home connection. For reference, I have a Pentium G3220 with 4GB DDR3 RAM which handles my 1Gbps FTTH connection with room to breathe.
> I will need to buy a smart switch right? Any tips on how to shop for one? I dont really know what to look for.

Why? Most dumb switches will pass VLAN-tagged frames just fine. However, if you want to do it the way enterprise does, you'll want a managed/smart switch. Those allow you to do VLAN tagging on the switch side so no extra client configuration is needed.
> I'm using the trial version of Vmware stuff. So i have a about a 60 day window. Should i make the switch early Proxmox? Which is worth more professional development wise?

I'm fairly sure ESXi is free. The management program is paid but the free version of it should do everything you want.
> Bonus objective: I sold my HTPC to afford most of this hardware. Is there a way I can connect a virtual machine to my tv to playback media? I'm using chromecast/ freenasPLex which is working fine enough. But i am curious.

Maybe, but I don't think it'd be easy. You'd be better off getting something like a Raspberry Pi running RasPlex with a Plex server. That's what I do.

u/chuck1011212 · 1 point · r/homelab

You are the one asking for solutions. I honestly think you are making too much heat if you are consuming 350 watts for your SAN and what looks like two servers plus the UPS.
Consumer grade UPS units also have replaceable batteries, but it sounds like your mind is made up on that. No problem.
A 12 bay synology with optional expansion unit can support 240TB of space. That should be enough to last you for a year or maybe two..... haha

Unit:
https://www.amazon.com/Synology-Station-Network-Attached-DS2415/dp/B00SWEM4DW/ref=pd_sim_147_1?_encoding=UTF8&pd_rd_i=B00SWEM4DW&pd_rd_r=82Q1XT3SWR3XHNBA3RE0&pd_rd_w=Abcab&pd_rd_wg=jqDSi&psc=1&refRID=82Q1XT3SWR3XHNBA3RE0

They also have rack mount units if that is more your style. Rack mount gear is less flexible though if you decide your rack needs to go or you want to resell your devices. Non rack mount Synologies hold their value better than nearly any IT gear I have ever seen.

Expansion Unit:
https://www.amazon.com/Synology-Station-12-Bay-Expansion-DX1215/dp/B00QMVGBNQ/ref=sr_1_17?s=pc&ie=UTF8&qid=1491247179&sr=1-17&keywords=synology

These draw between 37 and 75 watts of power depending on utilization.
You could then run several Intel NUC 6th-gen i3 ESXi servers with 32GB of RAM each. (NUCs are my current love interest for home lab ESXi hosts. They are small, silent workhorses.)

These draw 10 - 45 watts of power each.

So let's say you sell what you have and do this. What would be the power consumption and heat dissipation difference?
You are currently burning 350 watts. (I assume that is near idle power draw.)
With 2 NUCs and 1 Synology, you would be burning less than 100 watts 95% of the time. That is a huge difference in BTU of heat to dissipate, plus you would have less noise. I guarantee that.
Also, the Synology can run a Plex server as an app, among other apps. This could give you additional flexibility if you used those apps instead of VMs in some cases, and could let you get by with fewer VM resources should you decide to do so.

In addition to these suggestions, I would purchase enough RAM for your current or future setup to be able to shut down one or more hosts when your lab is not active. You could use wake-on-LAN to easily spin them back up as needed during lab testing workloads.
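For a rough sense of what that swap would mean, here's the arithmetic (the runtime split and the $0.12/kWh electricity rate are illustrative assumptions):

```python
# Rough comparison of the two setups described above.
current_w = 350          # stated draw of the existing SAN + servers + UPS
proposed_w = 100         # 2 NUCs + 1 Synology, upper bound "95% of the time"

delta_w = current_w - proposed_w
btu_per_hour = delta_w * 3.412           # 1 W dissipated is about 3.412 BTU/h
kwh_per_year = delta_w * 24 * 365 / 1000
cost_per_year = kwh_per_year * 0.12      # assumed $0.12/kWh electricity rate

print(f"{delta_w} W less -> {btu_per_hour:.0f} BTU/h less heat, "
      f"{kwh_per_year:.0f} kWh/yr, ~${cost_per_year:.0f}/yr saved")
```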

u/TheBloodEagleX · 1 point · r/homelab

Funny enough now that I'm doing a quick search to compare more, I'm finding okay ones below $300 for US. The prices seem to be overall similar. When I was looking I would often find what I wanted but at $400 up to $900+ (StarTech, Tripplite, etc) and easily found UK ones for below £250 ($335.42ish?). I was mainly looking at short depth but above 8U and only seemed to find good options abroad, like this:

UK (really like it): https://www.amazon.co.uk/19-Inch-Server-Rack-Cabinet/dp/B005TI4JDM

UK: https://www.amazon.co.uk/19Power-Standing-Server-Cabinet-Tempered/dp/B010MFB5C4

UK: https://www.amazon.co.uk/Mount-Server-Cabinet-Tempered-Glass/dp/B01MYXKLX5

US: https://www.amazon.com/StarTech-RK1236BKF-Knock-Down-Cabinet-Casters/dp/B006ZLV5HA

US: https://www.amazon.com/Tripp-Lite-Enclosure-Cabinet-SR24UB/dp/B0043WF9E8

I also love these 10" rack cabinets, which would work great for NUC-style cases or custom whiteboxes with mini-ITX, but I can't find them anywhere in the US: https://www.amazon.co.uk/Inch-Network-Cabinet-Tempered-Glass/dp/B0106RQP9G/ref=pd_sbs_147_4

u/EnigmaticNimrod · 5 points · r/homelab

My scheming from when last we spoke appears to be paying off.

I've taken a single Supermicro X9SCL-F board and put it into a server that I'm currently using as a super-simplified SAN - CentOS on a small SSD with a ZFS mirrored vdev pool totaling 2TB for VM storage. I've tested the Dell 0KJYD8 cards that I had lying around with some SFP+ transceivers that I bought on eBay in various configurations, and everything seems to work well. It looks like it's time for me to move on to Phase 2 of my plan :)

In preparation for Hurricane Florence (I live close to the east coast) I also went ahead and splurged on new batteries for all 4 of my UPSes - two Cyberpower 1500PFCLCD's and two APC Back-UPS Pro 1500's. I think, once I get the proper cable from Amazon to tell the APC's that they have new batteries and thus report an accurate remaining time to me, I will use those in my homelab, particularly because I can purchase battery expansions for these models to get even more runtime out of them. I'll likely use the Cyberpower UPSes for mine and my partner's desktop rigs. This was a relatively expensive purchase (compared to how much I've spent on the rest of my homelab), but it's definitely going to be worth it to be able to actually trust my UPSes in case of brownouts/blackouts going forward.

With all of that said, here's everything that's currently in my homelab:

Current Hardware


  • Whitebox SAN/VM Storage
    • Supermicro X9SCL-F
    • Xeon E3-1230
    • 16GB DDR3 ECC
    • 64GB Sandisk SSD - CentOS boot drive
    • 4x1TB spinning HDD's - 2x mirrored vdevs for 2TB usable
    • Dell 0KJYD8 2x10GbE NIC
    • Services/VMs running:
      • ZFS exporting datasets for VMs on the (currently only) hypervisor
      • OPNsense VM (master) - 2x NICs from the mobo passed through to the VM (means that technically this box is airgapped, which for a SAN is okay by me)
  • Whitebox Hypervisor 01
    • Shuttle case + mobo
    • Core i5-4670
    • 32GB DDR3
    • 64GB mSATA SSD - CentOS boot drive
    • Dell 0KJYD8 2x10GbE NIC (direct connect to SAN)
    • VMs running:
      • apt-cacher-ng - apt and yum caching server for other systems
      • many more planned but not yet implemented :)
  • Whitebox NAS
    • Generic case (will soon be replaced)
    • AMD FX-8320E
    • 8GB DDR3
    • 2x16GB Sandisk flash drives - ZFS mirrored vdev for FreeNAS OS
    • 6x4TB spinning HDD - 3x mirrored vdev for 12TB usable
    • Used as a target for backups, media, etc
    • *may* eventually get a 10GbE card if I ever wind up with a 10GbE fiber switch... whenever that happens. :P

      // todo (immediate)


  • Purchase rackmount cases and accessories for existing hardware
  • Purchase more Supermicro boards and replace other hypervisor hardware with them
  • Build a bigger rack (I've been inspired by posts around here of others building their own racks, and I figure I can give it a shot too)
  • ...actually get around to playing around with various homelab services :)

u/CollateralFortune · 1 point · r/homelab

I highly recommend Fractal Design or Lian Li cases. After going through a number of cases, external DASs, etc. for my 250TB+, those are the only cases I've been truly satisfied with. I highly regret the SilverStone cases I've purchased. This is based on cooling ability, noise level, and ultimate cost.

From a cooling perspective, the FD cases can't be beat. You can stuff them so full of huge (and quiet) fans that your drives will be positively arctic.

For building a DAS, all you need is a JBOD power board. They go from really simple to really complex, with IPMI, fan headers and all the fixens.

Then something like this with some breakout cables and you are set.

Obviously that's a bit more than the $70 you are talking about though. And your duplicator case method would be limited on bandwidth vs SAS, which would run the drives at full speed. But that's the method I use to put together my DASs now.

u/wet-hands · 1 point · r/homelab

I don't know of any tops that fit on that rack. Normally open-frame racks aren't meant to have a top at all, so not many people make them AFAIK. If you want an "open frame" rack with a top, you're best off buying an enclosed rack and removing the side panels and doors. The closed-rack equivalent I'd recommend from StarTech is the RK1236BKF.

u/Havage · 16 points · r/homelab

Hey! So what I did was buy this controller and then add on the fans wherever I wanted them. These parts are made for audio/visual racks, so they look good and run quiet; it's not an eyesore. I highly recommend looking at AC Infinity's other stuff as they have some new fans that I like. Easiest solution for you might just be something like this.

u/WordBoxLLC · 1 point · r/homelab

Shelving? Just find whatever floats your boat and can hold the weight.

Rack? https://www.amazon.com/StarTech-com-Rack-Enclosure-Server-Cabinet/dp/B006ZLV5HA?th=1

No experience with that one, but Startech makes pretty good racks for the price range.

Storage? Dell storage. I've only read that it's pretty open when it comes to DAS. I ran a whitebox freenas (iscsi) for an r710 (starting from ground up with the r710 again).

u/Onlythefinestwilldo · 16 points · r/homelab

Now that you mention it, I'd be curious too. I'll tally it up and get back to you all.

Edit: here it is!


Item | Price (USD) | Quantity
---|---|---
Belkin Power Strip | 30 | 1
Raspberry Pi 3 B+ | 38.30 | 2
Miuzei Raspberry Pi Cooling Case Kit | 25.99 | 2
Netgear 8 Port Gigabit Switch | 17.99 | 1
WD 2 TB External Hard Drive | 59.99 | 4
KingDian 8GB SSD | 10 | 1
Mitac PD12TI CC Mini-ITX Motherboard w/ Intel Atom D2500 CPU | 149.99 | 1
Mini-Box picoPSU-80 80w 12v PSU | 28.95 | 1
Sabrent 12v AC power supply | 10.98 | 1


Total: $616.45

I was doing pretty well until I got to the damn WD hard drives. I suspect I paid way too much for how good they are. Probably could have saved some money by building an enclosure and using regular internal hard drives or something.
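As a quick sanity check, the listed unit prices and quantities do add up to that total:

```python
parts = [
    ("Belkin Power Strip",            30.00, 1),
    ("Raspberry Pi 3 B+",             38.30, 2),
    ("Miuzei Pi Cooling Case Kit",    25.99, 2),
    ("Netgear 8 Port Gigabit Switch", 17.99, 1),
    ("WD 2 TB External Hard Drive",   59.99, 4),
    ("KingDian 8GB SSD",              10.00, 1),
    ("Mitac PD12TI CC Mini-ITX",     149.99, 1),
    ("Mini-Box picoPSU-80",           28.95, 1),
    ("Sabrent 12v AC power supply",   10.98, 1),
]
total = sum(price * qty for _, price, qty in parts)
print(f"${total:.2f}")  # $616.45, matching the figure above
```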

u/caiuscorvus · 2 points · r/homelab

Very roughly speaking I would say 2 cores each to gaming, windows 7, plex(1-3 users), and everything else. Especially if you use LXC for nextcloud, plex, and the sandboxes. Also, this depends on your gaming :)

(For LXC, look at using proxmox or straight ubuntu for your hypervisor. If Ubuntu, you can game on the hypervisor. Or, as with proxmox, passthru your GPU to a KVM machine for your daily driver/gaming.)

But consider your usual CPU load. A low-use Windows 7 machine won't use more than 10-30% of 2 cores unless you do some heavy lifting. And if your Plex machine is for yourself, or mostly for yourself, then you won't be gaming and transcoding at the same time as often.

All this is to say that I bet an 8-core threadripper would be enough. I would say a 12-core threadripper would give you plenty of room for expansion. Oddly, they're the same price on amazon right now.

An 8-core Ryzen 7 would work as well but with fewer pcie lanes and other good server features.

For RAM... maybe 6 for gaming, 6 for Windows, 2 each for Plex, file server, and Nextcloud, and 1 for each sandbox container. Probably 32GB to be safe. +16GB if using VMs rather than containers, +16GB if use is heavier or if you want to use ZFS.
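A rough tally of those allocations (assuming, for illustration, three sandbox containers):

```python
cores = {"gaming": 2, "windows 7": 2, "plex": 2, "everything else": 2}

ram_gb = {"gaming": 6, "windows 7": 6, "plex": 2, "file server": 2, "nextcloud": 2}
sandboxes = 3                       # assumed count; 1 GB each per the estimate above
ram_total = sum(ram_gb.values()) + sandboxes * 1

print(sum(cores.values()), "cores baseline")  # 8 -> an 8-core chip fits, 12 leaves headroom
print(ram_total, "GB RAM baseline")           # 21 -> 32 GB is a comfortable margin
```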

u/Ghan_04 · 1 point · r/homelab

The Ryzen Threadripper 1920X is on sale for $200 these days, which is a steal for 12 cores. It might be an interesting option for this use case.

https://www.amazon.com/AMD-Threadripper-24-thread-Processor-YD192XA8AEWOF/dp/B074CBJHCT

It also may have more reliable ECC support - this may come down to the motherboard manufacturers. I've heard spotty things about the mainstream Ryzen CPUs. The chips can do it, but it's not advertised and not really official. Might find something useful here: https://forums.servethehome.com/index.php?threads/verrified-list-of-threadripper-x399-motherboards-that-function-in-ecc-mode.19436/

Best thing would be to find a motherboard that someone else has confirmed will work with ECC.

u/DynamicBits · 1 point · r/homelab

I only see two items that are actually rack mountable. One thing to consider is a vertical wall mount bracket for the Netgear switch and patch panel. These brackets can be used horizontally as well, so you could even mount them to the bottom of one of the existing wooden shelves. Once the switch is taken care of, everything else can be mounted in a much shallower space.

If you want an enclosed wall mount cabinet, the Tripp Lite SRW12US and Tripp Lite SRW10US both support a mounting depth of up to 20.5". If you go this route, make sure the antennas on the wireless APs are located where their signal isn't blocked.

For about $100, you can get the Tripp Lite SRWO8U22 2-post open frame "cabinet," or the Kendall Howard 2-post 8U rack. Both support up to 18" mounting depth.

The StarTech RK12OD desktop 2-post rack for $46.99 is an interesting alternative to normal racks. Due to the slope, you want to be sure any equipment on a cantilevered shelf is somehow prevented from sliding off. Just set the DS1813+ at the bottom, between the posts.

With any rack/cabinet, you're probably still going to need a couple of cantilever shelves to hold the non-rack mountable equipment. With an enclosed cabinet, you can use the bottom and top as shelves. You could even cheat and put some of the lighter items on the Netgear switch.

Also, be careful how you stack the equipment that wasn't designed to be rack mounted. A lot of it will vent the heat up instead of out the back.

Until you put an air duct in the closet itself, I doubt there will be much circulation in there. Be careful about putting all of the equipment up high because all of the hot air will be more or less trapped above the door louvers.

u/cjalas · 2 points · r/homelab

In that case, you could get away with two 2u cases for the servers;
Let’s say 2u for the network gear;
Maybe another 2u for a UPS if you want to protect your gear from outages;
Add in some more space for future expansion....


I’d go with a 9u rack based on your hardware.


As far as hot swap setup for your drives, you could get something like this: https://www.amazon.com/ICY-DOCK-Rugged-HotSwap-Backplane/dp/B00TL4US8K/ and throw it into a 5.25 bay on a 2u server case, like this: https://www.amazon.com/iStarUSA-Server-Chassis-Cases-D-214-MATX/dp/B00A7NBO6E/


That’ll have a SATA backplane, and you can just pass-through the SATA connections directly to your mobo or a SATA card — no need for a SAS/SATA HBA Controller. Simple and cheap. They make hotswap cages for 3.5” drives, too. Gotta hunt around amazon a bit for those. Good luck!


P.S. check to see if your network gear has any screw holes on the sides — sometimes smaller network stuff can be rack mountable and just needs additional hardware (rack ears) from the manufacturer. Usually it's 4 small screw holes per side.

u/phr0ze · 1 point · r/homelab

This is a great box. Jetway Intel Celeron N2930 Quad Core Dual Intel LAN Fanless - HBJC311U93W-2930-B https://www.amazon.com/dp/B00OY8Q0QC/ref=cm_sw_r_cp_awd_Uxrxwb82WSRFD

Just add RAM and a small mSATA card. Price is around $250 after components, and the system is blazing fast with true Intel NICs. But don't plan to use the Wi-Fi in it; it's not capable of being an access point. You can get a different Wi-Fi card for $20 if desired. Nonetheless, remove the included Wi-Fi card to save heat and power.

u/wolffstarr · 2 points · r/homelab

Does it need to be 3U? You won't get a full-height card in a 2U chassis without a riser; my full-height cards barely clear my 3U's lid. If it doesn't need to be 3U, then the 4U Rosewills are probably your best bet. They have three 5.25" bays, and the front fan units can be replaced with 5.25" cages if so desired; the same basic chassis also comes with some of their 5.25"-to-3.5" conversion bays as the RSV-4500 chassis.

If you can deal with an mATX motherboard and figure out half-height cards, this iStarUSA 2U might fit the bill.

u/fusion-15 · 11 points · r/homelab

If she has a few thousand to spend, then I would go with this Synology 12 bay NAS along with 12 WD Red 6TB HDDs. That brings the total to ~$4,638. If you configure all disks into a RAID 6 volume, you end up with a little over 54TB of storage. I would also strongly consider, eventually, looking into an offsite backup solution as well [remember...RAID is not a backup].
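For reference, the ~54TB figure follows from RAID 6 reserving two disks' worth of capacity for parity and from drive vendors quoting decimal terabytes; a quick sketch:

```python
def raid6_usable_tib(disks: int, size_tb: float) -> float:
    """Usable capacity of a RAID 6 volume, reported in binary TiB."""
    data_disks = disks - 2                          # RAID 6 loses two disks to parity
    usable_bytes = data_disks * size_tb * 10**12    # vendor TB are decimal
    return usable_bytes / 2**40                     # convert to TiB, as most OSes report it

print(f"{raid6_usable_tib(12, 6):.1f} TiB")  # ~54.6 -> "a little over 54TB"
```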

To get the most out of it, you might want to consider getting a managed switch (just L2 is fine) so you can configure the ports on the NAS in a LAGG. You'd also want to invest in a decent wireless AC router. If the iMac is close enough, you could also just connect to the network via Ethernet (or directly to the NAS).

u/bobadad23 · 3 points · r/homelab

K.I.S.S. is my motto. If you don't need anything more than basic switch functionality, your best bet is a plug-and-play non-managed switch. You can get a nice Netgear 8-port off Amazon for $19.99.


https://www.amazon.com/gp/product/B00KFD0SEA/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1

u/StartupTim · 1 point · r/homelab

Thanks for the response!!

So I was doing a lot of research... Do you think this would be a great solution for hardware? https://www.amazon.com/gp/product/B00FN1OQVA/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1

I have one of those laying around and it meets all my physical requirements (lots of network ports, rack mountable, cost under $1k).

u/williamj2543 · 1 point · r/homelab

Well I decided just to use my ATX server (might get a new mobo and CPU) and just use that for the storage array. The R310 is strictly for storage, with minimal archiving and backup scripts. So with the ~$300 I save by not getting the R310, I can get the Dell rails, more hard drives, and SATA expansion cards.

By the way, how am I supposed to mount a whitebox ATX server? It's a standard 2U that is decently heavy and decently deep, but I have no idea how you would mount something like that. I checked Amazon, and would these things work?

https://www.amazon.ca/iStarUSA-TC-RAIL-24-24-Inch-Sliding-Rackmount/dp/B002XIR8SE/ref=pd_sbs_147_2?_encoding=UTF8&psc=1&refRID=71FHJ7020GE3DAQ83SY0

https://www.amazon.ca/Tripp-Lite-Universal-Adjustable-4POSTRAILKIT1U/dp/B00AO1W2F6/ref=pd_sbs_23_9?_encoding=UTF8&psc=1&refRID=8BZ40N5H3FTD6Z41G8TG

Its for this server

https://www.newegg.ca/Product/Product.aspx?Item=N82E16811128060&_ga=2.237138880.1975695971.1494960025-1092768654.1479255402

Thanks!

u/Hewlett-PackHard · 3 points · r/homelab

Build a tower with a Titanium rated PSU, a low wattage embedded ITX board from Asrock Rack or Supermicro w/ ECC support, a 9211-8i in IT mode and 2.5" 4TB Seagate drives (can be had for $100 by shucking externals). Really low wattage even with the drives spinning 24/7.

Here's a nice little drive bay.

Here's a low-power mobo that would work great for FreeNAS

Shuckable Seagate 4TB 2.5"

So, wattage wise, the drives idle at 1.1W each, the SAS card is about 8W, the CPU is a 14W TDP so I'll use that for the whole mobo's idle... looking at ~32W idle for the whole system? Maybe 40W at most. Also low noise.
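Spelling that estimate out (assuming eight of the 2.5" drives hanging off the 9211-8i):

```python
drives = 8                 # assumed drive count for an 8-port 9211-8i
drive_idle_w = 1.1         # per-drive idle draw quoted above
sas_card_w = 8
board_and_cpu_w = 14       # 14 W TDP used as a stand-in for the whole board at idle

idle_w = drives * drive_idle_w + sas_card_w + board_and_cpu_w
print(f"~{idle_w:.0f} W idle")   # ~31 W, in line with the ~32 W ballpark above
```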

Edit w/ moar partz!

Here's a decent and cheap Titanium PSU, very efficient. 8.2 by JonnyGuru.

Cheap Dell H310 HBA which can be reflashed to a 9211-8i IT mode for use with FreeNAS.

u/brkdncr · 9 points · r/homelab

You don't know what you're doing, and while this is a great thing to do for yourself, you shouldn't be doing that to paying customers.

Here is a commercial device that comes with a warranty, works well, and is fairly simple to set up:

http://www.amazon.com/gp/product/B00SWEM4DW

Yes, it's going to cost more than your home-built unit, but it comes with a warranty, support beyond just you, tested hardware, and software to handle performing those backups. It also eliminates the need for a boot device.

There are other budget-oriented storage hardware providers too, and another thing you can do is call them up and ask them to help configure the cheapest option (for instance, an 8-bay unit with an expansion unit using more 6TB drives may be cheaper than a 12-bay using 8TB drives).

u/el_buzzsaw · 2 points · r/homelab

I ordered one of these racks about 2 years ago when I wired up my house and finished my home office:

https://smile.amazon.com/gp/product/B01A6JQV8Y/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

my only hangup on it (and you can actually see it on a review) was that the screws that came with the rack were junk, and I had to go out to get new ones that actually would fit right.


also using:

this patch panel - https://smile.amazon.com/gp/product/B0072K1OWY/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

this shelf - https://smile.amazon.com/gp/product/B008LUW4CI/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

and this cable manager - https://smile.amazon.com/gp/product/B01HJTTOH4/ref=oh_aui_search_detailpage?ie=UTF8&psc=1


wish I had photos of the little get-up in my closet for you. I've got a basic 4-outlet battery backup powering my ISP modem, my router, managed 24 port switch, and the seagate single-drive NAS we use.


One day I'd like to get one a bit bigger so it can also support the Pi server that's also sitting on that shelf and the PoE injectors for my APs.

u/b1g_bake · 3 points · r/homelab

I'm using Acurite temp/humidity sensors and a USB RTL-SDR to sniff the signals. Then a neat little piece of software called rtl_433 decodes the radio signals and can output JSON over MQTT. I have Home Assistant listening to the topics and just view the data there. I'm sure there is an easy way to get the data into Grafana as well. I ran that setup on an RPi with no problem, but have since switched to a NUC and things are still going great.
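For getting those readings into Grafana or anything else, here's a minimal sketch of listening to rtl_433's MQTT output with the paho-mqtt client; the broker address and topic pattern are assumptions and depend on how rtl_433 was launched (e.g. `rtl_433 -F mqtt://localhost:1883`):

```python
import json
import paho.mqtt.client as mqtt

BROKER = "localhost"          # assumed MQTT broker host
TOPIC = "rtl_433/+/events"    # assumed topic; match whatever rtl_433 publishes to

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)  # rtl_433 publishes JSON event payloads
    print(reading.get("model"), reading.get("temperature_C"), reading.get("humidity"))

# paho-mqtt 1.x style; on 2.x use mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_forever()  # from here, forward readings to Grafana, a database, etc.
```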

u/CanuckFire · 3 points · r/homelab

If you want something quick and easy, spend a bit more money and get these. I've used versions of them before and they are super convenient.
(If your cabling comes into the box from the top, just install the plates upside-down so the cables don't bend in the box.)

https://www.amazon.ca/Platinum-Tools-100010C-Connectors-Clamshell/dp/B000FI9VU2

https://www.eagleg.com/products/cat6-right-angle-keystone-coupler-white?variant=18620127543353

u/tekwreck89 · 2 points · r/homelab

Yeah, I found a 1U vent piece. I'm going to grab two of those and put one on each side of the cabinet.

Something like this -
http://images.canford.co.uk/Images/ItemImages/large/16-927_01.jpg

Or if I start adding more stuff I might get a case fan and put that on the cabinet -
http://www.amazon.com/AC-Infinity-AI-CFD120BA-Quiet-Cabinet/dp/B009CO543S

It's going to be stained black so the pieces will blend in nice.

u/darkciti · 1 point · r/homelab

Thanks. I'm thinking I can use a 4 port card (but the H200 is only 2 ports) and break 2 of them off to an external SAS adapter like this.

Now I'm just wondering if the performance would be better with 2 cards or 1 card with 4 ports.

u/somuchmoresnow · 4 points · r/homelab

This is what I'm using for pfSense.

It's actually a really awesome little machine, I'll be ordering more for sure.

You can still use your OpenWRT box as a wifi access point behind the firewall.

You'll want a decent switch to hook it all together.

u/albatrossLol · 1 point · r/homelab

There are vents and mounting holes for fans - 2 on top and 2 on bottom.
Here’s the rack

Couldn't find any good small options locally, so I didn't get any Craigslist steals.

I've read good comments on these fans' CFM-to-noise ratio:
Noctua Fan with Focused Flow and SSO2 Bearing, Retail Cooling NF-F12 iPPC 3000 PWM

u/mspinit · 1 point · r/homelab

These are nice and easy:
https://www.amazon.com/NavePoint-Universal-Mount-4-Post-Compaq/dp/B00XXDJASY

It looks like they cross into the U below them, though.

u/andre_vauban · 1 point · r/homelab

Your plan sounds mostly reasonable. A few suggestions though.


Try and move your central termination point/patch panel to somewhere inside the house. The high temperature and humidity (ie condensation) in a garage don't often play nice with electronics.


For the ISP connection, I would just extend a cable (preferably both RG-6 coax and cat6 for future uses) outside to their demarc and run it back to your main wiring hub. Your solution would work, but might as well run it back if you are putting in the work.


Run some more cables to ceiling locations for you to put POE wifi access points.


Don't have two switches, just have a single switch with some POE ports. That interconnect between the two switches will become a bottleneck if you have any significant amount of intra-house traffic.


Ubiquiti is not overkill for your setup, but I would be hesitant to put gear in that price range in the garage. If you cannot get out of the garage, you might want to make sure to use gear that you can afford to replace when it breaks.


When you say " head-height cabinet ", I assume you mean a proper 4 post enclosed server rack, something like: https://www.amazon.com/Tripp-Lite-Enclosure-Switch-Depth-SRW12US/dp/B001TGUYI2. If you meant kitchen cabinet, don't do that :P


If you have the walls open, run 2-4 times more cables than you think you need to each wall face plate. Also, look into using conduit, so you can easily pull new cables through in the future. If cost becomes an issue, you can run the cables outside the conduit and run empty conduit for future use. If anything happens with the old cables, just abandon them in the walls.


Run RG-6 coax, fiber, speaker wire, or any other low voltage cabling you can think of while the walls are open as well.

u/45Deputy · 1 point · r/homelab

I've been using these lately: http://www.amazon.com/Jetway-Intel-Celeron-N2930-Fanless/dp/B00OY8Q0QC/ref=sr_1_1?ie=UTF8&qid=1452191765&sr=8-1&keywords=jetway

Amazon says they only support 4GB of RAM, but per the Jetway website (and I can attest to it), you can put 8GB in.

u/eng_knight · 1 point · r/homelab

I have had good success with shucking these... occasionally they drop below $100, not bad for a 4TB drive.
https://www.amazon.com/gp/product/B00ZTRXFBA/

They show up as ST4000LM024, which are SMR drives (not PMR; thank you mj_turner for the correction).
They are 15mm drive height but that works fine for most if not all server disk trays.

I have also used these
https://www.amazon.com/gp/product/B01LX13P71
not bad for the price and density.

u/soawesomejohn · 4 points · r/homelab

This is kind of like when I first started out: I was rather excited about EZ-RJ45 connectors, but I soon learned they're not worth it.

The ends are expensive. You need the more expensive crimps. If you really want to spend money to save time, just buy patch cables. The only times I make my own ends anymore is if I'm making a custom connector (usually for ham radio) or if I need to run the cable through a small hole.

u/trogdorr · 1 point · r/homelab

Those look interesting. Was hoping for something a bit cheaper.

I found these. Although not tool-less, they don't require cage nuts because they have threaded holes.

https://www.amazon.com/Universal-Adjustable-Enclosure-Cabinet-4POSTRAILKIT1U/dp/B00AO1W2F6/ref=sr_1_1?s=electronics&ie=UTF8&qid=1497567174&sr=1-1&keywords=tripp+lite+rails

u/mahkra26 · 3 points · r/homelab

I bought a 24-bay supermicro 2u case with an old AMD motherboard in it and gutted it into a JBOD array with the help of a few small adapters, like so:

  • There's these (with nothing to remove thankfully) on ebay right now: case
  • Install this in place of a motherboard: JBOD module
  • You'll need an 8087 to 8088 adapter
  • You might need some 8087-8087 cables

    Topology is: SAS expander backplane top and bottom ports (ignore middle) to the two internal ports of the low profile adapter via two 8087 cables, then a standard e-SAS (8088) cable to the LSI 9207-8e in my server from the external ports.

    This has worked out fabulously for me.

    For added comfort (aka noise and power consumption), I removed the stock dual power supply that the 2u case included and replaced it with the guts of a 230w atx power supply, since I don't have dual sources. That cut the power draw down by ~80w or so. I also replaced the fans with much quieter ones (standard ~50 CFM 80mm units) and then improved airflow by taping over holes with masking tape, and using a thick paperboard to block other areas - the main purpose being to force the airflow through the drive bays.


    Edit:
    If you prefer LFF drives, there are 12-bay 3.5" units already assembled with all the necessary parts on eBay: http://www.ebay.com/itm/222338813833

u/iamwhoiamtoday · 1 point · r/homelab

I have a similar chassis and was faced with the question of which drives to put in it. I ended up selecting these.

They are 4TB 2.5" Seagate drives, and mine have been rock solid so far. They use minimal power, produce minimal heat, and generally perform well enough for my purposes. I've been picking up 1-2 of them every other paycheck and am gradually expanding the array. :)

u/seizedengine · 3 points · r/homelab

You can also buy adapters if you have trouble finding a card like the 9207-4i4e

SFF-8087 and SFF-8088 do the same thing; SFF-8088 (external) connectors are just larger and much more durable. So converting between them is easy and safe.

https://www.amazon.com/CableDeconn-SFF-8088-SFF-8087-Adapter-bracket/dp/B00PRXOQFA/ref=pd_cp_147_1?_encoding=UTF8&pd_rd_i=B00PRXOQFA&pd_rd_r=GE4FNVAZ8Q0AGPSFF88G&pd_rd_w=88BoN&pd_rd_wg=kiVIF&psc=1&refRID=GE4FNVAZ8Q0AGPSFF88G

u/unfadingpyro · 1 point · r/homelab

For Cat 6 outdoor I've used this brand: VIVO Black 500 ft Cat6 Ethernet Cable 23 AWG/Wire 500ft Cat-6 Waterproof Outdoor/Direct Burial/Underground (CABLE-V012) https://www.amazon.com/dp/B00GYGQ31E/ref=cm_sw_r_cp_apa_i_dSzZCb1DS5JQR

Really good quality cable. True Cable on amazon also seems to be a good quality, but I've not used it personally.

Patch panel I have: [UL Listed] Cable Matters Rackmount or Wallmount 24 Port Cat6 Patch Panel (Cat 6 RJ45 Patch Panel) https://www.amazon.com/dp/B0072K1OWY/ref=cm_sw_r_cp_apa_i_IUzZCbMJ3VPYZ

u/ProofPool5 · 2 points · r/homelab

As it usually is, you've got to pick your battles. The Dell systems are cheap because so many exist that supply outweighs the number of people looking to buy them. Businesses just buy new, and there just aren't enough homelabbers.

If you want small and portable (why the heck you want a portable server is beyond me, but whatever) then build it yourself. I doubt you need all the R720 redundancy or you wouldn't be looking at the SuperMicro, and the SuperMicro is basically just an off-the-shelf desktop in a nice case.

You can also look at the older HP MicroServers but they're not powerful.

I would just buy the SuperMicro case https://www.amazon.com/Supermicro-Superchassis-CSE-721TQ-250B-Mini-Tower-Supply/dp/B00REWHHNU for $184 and I'm sure you can spec it out yourself for way less than they want for a pre-built one. You could still probably buy an entire Dell server for the price you're spending on just the case, but you're not getting a compact server like that for the price of an off-lease Dell.

u/yeagb · 1 point · r/homelab

Platinum tools makes something like that: Platinum Tools 100036 EZ-RJ45 Cat6 Strain Relief, (Clear). 50/Bag.(Pack of 50) https://www.amazon.com/dp/B00939KKX6/ref=cm_sw_r_cp_apa_gDjGAbZH3BZ0A

And: Platinum Tools 100010C EZ-RJ45 Cat 6+ Connectors, Clamshell, 50-Pieces https://www.amazon.com/dp/B000FI9VU2/ref=cm_sw_r_cp_apa_mEjGAbCVFZD7F

But they have their own crimper that cuts off the excess wire. I've never used them but I know people who do and they like them.

u/Lee_Ars · 3 points · r/homelab

> how would the NAS be connected to the server? Ethernet, eSata, magic?

The "NA" in "NAS" stands for "network-attached," and there's your answer: Ethernet. You can use nfs, smb, iscsi, or any other network storage protocol that matches your requirements. If you're not sure what you want to use, you need to first decide what you're trying to accomplish here—what exactly you're going to use this NAS for. That answer will then inform how you want to set up your shares.

>If I move all my media to a separate NAS enclosure, how is that handled? Do I need to have a cpu/mobo/mem in the NAS, or am I still able to use my server?

I run a plex server on a mac mini HTPC, and I keep my plex media library on a NAS in the other room. They're connected via gigabit ethernet and the server doesn't care; transcoding, streaming, and all other operations work normally. Transcoding is done using the HTPC's CPU.

Think about it this way: the only thing you're moving is the physical location of the media. The server's going to work the same way whether it has to get its media from a locally attached hard drive or a network share. The server don't care.

> are there NAS enclosures that can handle the number of drives I'm using, without getting into the crazy price range (crazy for me is over $400-500)?

Prebuilt NAS systems, like a Synology or QNAP? As a rough guide to consumer NAS pricing, you can estimate that you're going to pay around $100 per disk slot. 8 bays is going to cost you a bit under a grand. From there, they tend to jump to 12 bays and get pretty silly.

If your budget is $4-500 and you have a hard requirement to hold 9 disks, you're either consigning yourself to DIY or you're going to have to start hunting through ebay for used NAS gear. Or revise your requirements down to a 4- or 5-bay NAS.
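That $100-per-slot rule of thumb, as a quick back-of-the-envelope calculation (a rough estimate only, not actual pricing):

```python
def consumer_nas_estimate(bays: int, per_bay_usd: int = 100) -> int:
    """Very rough street-price estimate for a prebuilt consumer NAS enclosure."""
    return bays * per_bay_usd

for bays in (4, 5, 8, 9, 12):
    print(bays, "bays ->", f"~${consumer_nas_estimate(bays)}")
# 8 bays -> ~$800 ("a bit under a grand"); a $400-500 budget points at 4-5 bays or DIY.
```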

u/MikeWaz0wski · 1 point · r/homelab

If you don't mind "sit" style 3rd party rails, these have worked well for me -- I've a heavy 3u box (Supermicro SC836E2-R800) on these, and they don't sag/bend at all. The chassis has some screws through the front handle area to secure it into the rack.

u/stupac62 · 2 points · r/homelab

They are not "so expensive". Look:
normal patch panel-$37
and the Keystone patch panel-$19

This implies the "keystones" included in the integrated patch panel cost $18 for 24 of them, or $0.75 a piece.

Actual keystones-$27 or $1.08 per keystone.

So, the Keystones + Keystone Patch Panel is $7.92 more expensive than the integrated patch panel. This is easily worth the cost. If I want to move a terminated cable, I just release the keystone and move it. Now think of the integrated patch panel.

Edit: formatting.
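The arithmetic behind that comparison, using the prices quoted above:

```python
integrated_panel = 37.00    # patch panel with the jacks built in
keystone_panel = 19.00      # empty keystone patch panel

implied_per_jack = (integrated_panel - keystone_panel) / 24   # $0.75 per built-in jack
separate_per_jack = 1.08    # per-keystone price quoted above

premium = (separate_per_jack - implied_per_jack) * 24
print(f"${implied_per_jack:.2f} vs ${separate_per_jack:.2f} per jack "
      f"-> ${premium:.2f} extra for the keystone route")   # $7.92
```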

u/port53 · 1 point · r/homelab

https://www.amazon.com/gp/product/B00JFR2I2C/

This is what I use. Works great. I have this in 710s and a 510.

u/SirMaple_ · 2 points · r/homelab

I just put one of these in my Supermicro CSE846 running Proxmox and it works great.

u/asshopo · 2 points · r/homelab

Get an SSD/2.5" HDD CD-ROM bay adapter + cable like /u/Maggen96 mentioned, or a PCIe SSD card like this. Boot off that. Then your HBA is available to pass into Proxmox/ESXi/etc. for a FreeNAS/OMV/whatever NAS.

u/IridiumElement · 2 points · r/homelab

I utilized this with success. Since I have ESXi running on a USB drive, I had to inject the correct drivers into a new ISO and reinstall ESXi.

u/spanky34 · 1 point · r/homelab

I'm working on doing this now. I have a closet under my stairs (bi-level house). The lower level is half underground so it stays pretty cool down there. I plan on getting at least two sets of these and finding some A/C vent grates to dress them up. One pulling in from the bottom of the door and one exhausting out one of the stair risers. If it's not enough, I'll add another set.

In your place you could probably buy your own closet door, install it, and add an intake at the bottom and an exhaust at the top. Then when you move, put the original back so they don't complain about it not matching the other units.

u/punzada · 2 points · r/homelab

It would probably be overkill, but I'm running pfSense virtualized on one of these and it's fantastic for my uses: the Supermicro SYS-5018A-TN4. Pros: low wattage vs. performance, near-silent operation, IPMI, 4 Intel NICs. Cons: uses pretty odd ECC SO-DIMMs that aren't typical; cost.

A cheaper solution would be the Netgate 2440, which may actually make more sense, since my Supermicro really shines for how many cores it has and, as far as I know, pfSense is still really single-thread-limited for basic operations.

u/ndboost · 1 point · r/homelab

you'll want to double check (via google-fu) that the card is supported (I'd be surprised if it wasn't).

In regards to external ports, get one of these; it's what I'm planning on buying when I get my NetApp DS4243 in.

u/Commander-Flatus · 1 point · r/homelab

So here's my self follow-up. I tried this card:

http://www.amazon.com/gp/product/B00JFR2I2C?keywords=inateck%20usb%20pcie&qid=1450042134&ref_=sr_1_6&sr=8-6

But no workee. Since I had free shipping and returns, I figured it was worth a shot because it's 11 bucks less than the same company's Mac-compatible card and, from what I could tell, used the same controller, etc.

Now I'm ordering this one:

http://www.amazon.com/gp/product/B00I027GPC?psc=1&redirect=true&ref_=oh_aui_detailpage_o00_s00

as it's well reviewed by Hackintosh people. If that doesn't work I'll try the HighPoint, but I got burned by one of their RAID cards I bought years ago and I'm still bitter (and a hardhead).

u/drexvil · 2 points · r/homelab

I asked the same question recently and it seems like the Inateck 4-port card is recommended. Haven't tried it though:
https://www.amazon.com/gp/product/B00JFR2I2C

https://www.reddit.com/r/homelab/comments/6actac/r710_add_in_usb_30_pcie_card/

u/grokdesigns · 1 point · r/homelab

That switch should be fine for getting started and just being able to connect more devices (you can also get it $5 cheaper on Amazon). Your wiring diagram is correct. Your modem shouldn't be a bottleneck at all unless your internet service provides greater than 300Mbps.

u/brokenhomelab · 2 points · r/homelab

As u/wtallis said, it's a 12U rack from StarTech.

EDIT: just a comment on it if you're considering it. The build quality is pretty inconsistent, and a fair number of the holes for the hinges don't line up quite right. I ended up having to correct the holes with my drill.

u/fourlynx · 0 points · r/homelab

This is correct. You can however get a PCIe-to-SATA adapter or use an adapter in place of the optical drive to make the SSD accessible in a way that is independent of the HBA. Note that the optical drive is connected with a link that only does up to SATA II / 300MB/s.

I have tested this and as long as you use it in BIOS mode it'll work alright. I believe the device doesn't contain an EFI boot ROM, so it cannot be used to boot an OS in EFI mode (but it works alright once an OS has been bootstrapped enough).

u/giantsnyy1 · 2 points · r/homelab

Ok... so that’s way out of the budget.

Has anyone used something like this? StarTech 12U Enclosed Rack

u/dt7693 · 2 points · r/homelab

I bought these universal rails for my Rosewill chassis and I really like them. I believe they will work with the T620 as long as it's between 17-19" tall ("wide," when laid flat).

u/IncognitoTux · 2 points · r/homelab

If your NUC is gen6 or later it will support 64GB of RAM. RAM prices are $118-$126 for 32GB. https://www.amazon.com/Samsung-2666MHz-Memory-Computers-M471A4G43MB1/dp/B07N124XDS

u/TitaniuIVI · 2 points · r/homelab

I don't have an R610, but I do have an R710 SFF. Here's my plan for storage: one 500GB SSD for VM storage (~$100), then filling up the rest with 4TB ST4000LM016 drives that can be removed from these (~$110).

My R710 already has an H700, but you can also get an H200 if you want to do software RAID. I'll probably be switching to an H200 eventually, but the H700 is fine for now.

As far as storage, the limiting factor is going to be the Perc 6i. Upgrade that first, then you should be able to get a decent amount of storage on there.

u/N------ · 2 points · r/homelab

Pretty common in rack systems. Granted, I have a 47U full-depth rack, but the width should be the same.

Here is a link for an adjustable one.
https://www.amazon.com/NavePoint-Universal-Mount-4-Post-Compaq/dp/B00XXDJASY

u/gx1400 · 2 points · r/homelab

I got my H200 in last night and after about an hour and a half of trying to figure out why I couldn't get the flashing utilities to work, I read that Dell BIOS doesn't allow the LSI firmware utilities to do their job (specifically megarec.exe).

Popped it into my PC and it worked very quickly. Pulled out the fan adapter and cable cover, removed the Perc6i cables, and fed the new SFF-8087 cables thru the cable feed. Reinstalled and booted up Unraid; it all worked very quickly.

Took about 14 hours to build parity for 4x 8TB (1 is the parity drive) since I didn't wait for preclearing the drives.

I have a couple old 120GB SSDs at home, so I ordered a DVD reader tray adapter and a PCIe adapter.

Those will probably be used as unassigned drives for VMs, though I'll probably pop one in as a cache drive and see how quickly my normal day to day use fills the cache. May transition to using 1 or 2 2TB sata spinners I have sitting around as cache drives.

I'll probably stand up a 2nd domain controller and a 2nd pihole instance this weekend on the SSDs.

u/bobbywaz · 1 point · r/homelab

Mounting a 32.5 inch rack on a wall, especially with hinges would require some serious hardware to keep it on the wall. Put a tape measure to the wall at 33 inches where you plan to put this rack, then rock it like a hinged door.

For one third of the price, you could get something without sides. Many people opt for floor racks on casters for these types of setups:
https://www.amazon.com/12U-4-Post-Open-Rack/dp/B0037ECAJA/ref=sr_1_18?s=electronics&ie=UTF8&qid=1501683124&sr=1-18&keywords=startech+rack

or with sides:
https://www.amazon.com/StarTech-RK1236BKF-Knock-Down-Cabinet-Casters/dp/B006ZLV5HA/ref=sr_1_8?s=electronics&ie=UTF8&qid=1501683350&sr=1-8&keywords=12u+rack

u/sthiffea · 3 points · r/homelab

I used this:
https://www.amazon.com/gp/aw/d/B01452SP1O/ref=mp_s_a_1_6?ie=UTF8&qid=1494765792&sr=8-6&pi=AC_SX236_SY340_QL65&keywords=ssd+pcie&dpPl=1&dpID=41xHd0vAdHL&ref=plSrch
To install a 256GB desktop SSD I had lying around in my R610. Took half a minute to install. I moved all the Windows VMs I interact with (via remote desktop) onto it, along with the ones I'm currently working on a lot. Once they are built and forgotten about, I just live-migrate them to the spinning rust. vSphere live storage migration is fun for moving VMs around from SSD to spinning rust to iSCSI, etc.

u/Spaatz1866 · 1 point · r/homelab

This combination worked very well for me recently:
SEDNA PCI Express (PCIe) SATA III (6G) SSD Adapter https://www.amazon.com/dp/B01452SP1O/ref=cm_sw_r_cp_api_KgJJAbBJKV9HD
and
Samsung 860 EVO 250GB 2.5 Inch SATA III Internal SSD (MZ-76E250B/AM) https://www.amazon.com/dp/B07864WMK8/ref=cm_sw_r_cp_api_EiJJAbD40HZ6J

The SSD did not work with the Dell H700 raid card.

u/ZiggidyZ · 1 point · r/homelab

I haven't used the cables yet, but per my cable tester I had no issues with all but one of them. That one was a fluke more related to the beverage being consumed whilst making the cables; one of the conductors didn't make it all the way into the plug before it was crimped, or it pulled out a bit.

I do see what you mean though.

Edit: These are basically what they are. I had the ones with the strain relief pieces too.

Platinum Tools 100010C EZ-RJ45 Cat 6+ Connectors, Clamshell, 50-Pieces https://www.amazon.com/dp/B000FI9VU2/ref=cm_sw_r_cp_apa_Iz-OybE63REYG

u/Bhawk-11 · 1 point · r/homelab

Okay. I have the PERC 6/I. I was gonna use this to connect the ssd without having to muck about with cables.

u/TheDarthSnarf · 2 points · r/homelab

Yep, I'd say a pair of the AC Infinity AIRPLATE S7 dual 120s one at the bottom for intake, and one at the top for exhaust. Possibly some ducting in the cabinet to maximize proper airflow.

u/HumbleNewblet · 6 points · r/homelab

M471A4G43MB1

Amazon has it for $126 right now.

Samsung 32GB DDR4 2666MHz RAM Memory Module for Laptop Computers (260 Pin SODIMM, 1.2V) https://www.amazon.com/dp/B07N124XDS

u/seniortroll · 2 points · r/homelab

https://www.amazon.com/gp/product/B00A7NBO6E/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

38.989 cm depth, I have it and the quality is really good.
I have no clue if it's available in europe though.

u/SoarinFerret · 2 points · r/homelab

I'm sure there is more, but here is from off the top of my head:

  • Figure out why my DNS is not replicating across my DCs
  • Migrate from my pfSense VM to the Fortigate 60D I just bought
  • Install the two Sedna PCIe Sata III adapters and 1 TB SSDs in my R710
  • Migrate VMs from old server to my R710 onto SSDs
  • Turn the passthrough disks on my file server VM into one big VHDX
  • Decommission old server from prod, sell parts so I can purchase another R710 for a Hyper-V Failover Cluster
  • Update WDS images
  • Build a rack mount for my Fortigate similar to this that doesn't cost $150
  • Checkout Duplicati to replace Crashplan for offsite backups of critical information
  • Look into a new high density disk storage solution
  • Figure out why my ESXi won't boot my latest macOS image

    ...and most importantly: cry because of my electric bill

u/dontthroworanges · 1 point · r/homelab

It's just this iStarUSA rack. I removed the handles for the aesthetic modification I have planned. https://www.amazon.com/dp/B00A7NBO6E/ref=cm_sw_r_cp_apa_i_ovUZCbCMXKH08

u/mrbuttons454 · 1 point · r/homelab

I'm using a Supermicro quad core Intel Atom server running PFSense.

This one in particular: http://amzn.com/B00FN1OQVA

u/Robbbbbbbbb · 2 points · r/homelab

There are two that I know of:

u/_kroy · 1 point · r/homelab

For now, I was just going to use it as-is. I know it's some X3400 CPU under the hood.

Based on my order history, it was just one of these

u/aselwyn1 · 2 points · r/homelab

For power, there are PCIe boards that take power from the PCIe slot but have a SATA data connector and will hold the 2.5" drive:
https://www.amazon.com/gp/aw/d/B01452SP1O/ref=mp_s_a_1_6?ie=UTF8&qid=1494765792&sr=8-6&pi=AC_SX236_SY340_QL65&keywords=ssd+pcie&dpPl=1&dpID=41xHd0vAdHL&ref=plSrch
I was planning on getting one but have not gotten around to buying and testing it yet.

u/HughJohns0n · 6 points · r/homelab

Try one of these.
PCI Express (PCIe) SATA III

https://smile.amazon.com/gp/product/B01452SP1O/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1

Using one of these for boot drive with a cheapo SSD in my lab, which also runs on a refurbished 7010.

u/KoopaTroopas · 1 point · r/homelab

I installed an SSD into my R710 using one of these. Gets me full 6Gb/s SATA speeds, and leaves my 6 backplane slots open for bulk storage.

u/benuntu · 1 point · r/homelab

With your needs, I'd take a look at this StarTech 12U. It's enclosed and has options for "wood" panels or metal that wouldn't look too out of place in a living room or bedroom. The wood trim options are spendy, but the glass door 12U comes in at about $400. Still not cheap, but better than most options out there. The open version of this rack is about $200.

u/ninut_de · 1 point · r/homelab

My cases are almost the same: www.amazon.com/dp/B00A7NBO6E/ from a different EU-vendor.
I switched to 2.5" drives in the 5.25" bay, because the 3.5" disks were blocking the airflow above the RAM.

Is the PSU not an obstacle for full-size ATX?

u/7824c5a4 · 4 points · r/homelab

He mentioned in his last post that it has QSFP ports, and that he would be buying an SFF-8088 to QSFP adapter. No idea how NetApp handles it though.

OP says
> IBM M1015 in IT mode -> SFF-8088 to SFF-8087 adapter card -> NetApp DS4243 via QFSP -> SFF-8087 cables

u/cosmos7 · 2 points · r/homelab

If you've got nothing you should at least support the box with L-bracket rails. Those eight screws will hold a couple hundred pounds no problem, whereas the box ears won't.

u/Drak3 · 1 point · r/homelab

these bad boys?

meh, if it works, it works.

u/kriebz · 2 points · r/homelab

Hmm, if some dude really wants to steal your heavy, obsolete servers... that would be something. Doubt the cabinet lock would stop him. I've seen a handful of servers mounted vertically; they all flow so much air that it's not going to be an issue. Of course, if your garage is 110F in the summer, that will be the issue.

If you feel like investing, maybe this:
StarTech.com 12U AV Rack Cabinet - Network Rack with Glass Door - 19 inch Floor Standing Audio Visual Computer Cabinet (RK1236BKF) https://www.amazon.com/dp/B006ZLV5HA/ref=cm_sw_r_cp_api_i_fc.5CbFZ1XRDC