madbrain wrote: ↑Sun Nov 07, 2021 12:52 am
So, the lowest I got from the Ryzen build was about 59W idle, after removing all the PCIe cards except the GPU and disabling the light show on the motherboard. Turns out those are worth a couple of watts! I didn't try to disable the LEDs on the GPU; maybe it's possible. This was in Win11.
The old 5820k machine idled at 125W when fully loaded with all the large case fans, the boot NVMe SSD, 8 SATA SSDs, 4 optical drives, both CPU fans on the cooler, and all PCIe slots full. Removing the overclock on both the RAM and CPU drove idle down to 110W. That was in Win10.
I'm going to unplug all but the GPU & NVMe and take one final reading to see the lowest it gets, before disassembling.
The 5820k system dropped to 66W idle after all PCIe cards and drives were removed except the NVMe boot drive. This was without OC. Not so different from the 57W low that the 5950X achieved under the same conditions.
I have finally finished the transplant into the big Cooler Master HAF 932 Advanced (HAF X) case. I decided not to simply swap the motherboard / CPU / RAM, but to significantly clean up the internals as well.
There was cat fur everywhere inside the case, some of it likely from cats that are no longer alive, to the extent that spraying compressed air wasn't enough. Fortunately, the case makes it very easy to remove devices. All but one of the six front 5.25in bays are removable with just clips, no screws; the other needs only a single screw. The five internal 3.5in trays just slide out, again without screws.
I took out all the drives in all of the bays. Of course, removing the drives also meant disconnecting all their cables, both power and data.
The 14 SATA devices were almost all powered by Molex-to-SATA power adapters, for some reason. I'm trying to remember why I did it this way. The only thing I could think of was that the modular PSU SATA cables might not have been long enough to reach all the drives, but that still doesn't explain it. I was able to eliminate every single Molex-to-SATA power adapter by just using all four modular SATA cables, each offering four power connectors, for 16 in total. The dual-drive Icy Dock uses a single SATA power connector to power two drives, so 13 of the 16 connectors are in use, leaving 3 still available. The Corsair modular PSU cables are all black, which matches the case color, and looks much better than the Molex-to-SATA adapters with their 4 different wire colors and white connectors on one end.
For the data cables, I switched all 6 SATA data cables to the same color as well, black, and used all latching connectors, with right-angle downward connectors on the 4 SATA optical drives. I looped the SATA data cables through multiple holes in the case so that they don't stick out when closing the right side panel of the case. I used black cable ties to attach them, too.
I reordered the 4 optical drives from the top to match the SATA port numbers on the motherboard, so the first drive on top is on port 1, the second drive below it on port 2, etc. Below that is the card reader. And then finally, the SATA dock is hooked up to the last two SATA ports.
For the 8 internal SATA SSDs, the two mini-SAS cables are black on the HBA side and blue on the SATA side. The wires on the SATA side are much thinner and more flexible than regular SATA cables, making cable management much easier.
One of the biggest issues of the night was that the tiny M.2 screw holding my NVMe SSD to the old motherboard was not compatible with the standoff on the new motherboard; the screw was too small. I checked dozens of screws of various sizes from my bags. None fit, all were too large. I remembered buying an M.2 screw kit from Amazon last year. Sure enough, it was in my order history, but I just couldn't find it. Finally, a few Google searches revealed that M.2 screws are actually specific to each motherboard and normally come bundled with it. I checked the box, and there was a bag with 4 tiny screws. One of them fit! I installed the SSD in the M2_1 slot, the one without a heatsink, between the top x16 slot and the CPU.
I dealt with the case wiring also. I folded and tied back the front eSATA and FireWire cables that aren't connected to anything, so that they aren't left dangling randomly.
I tied down all the case wires in a much better way, again routing them through holes in the case and tying them to various parts of the case along the way. The case wires and fan wires are the main remaining ones that aren't black, along with the USB 3.0 card reader cable, which is blue.
I connected both of the front USB 2.0 headers, so all 4 USB ports built into the case work. HD audio is also connected, though I will probably never use the front-panel audio connectors.
I used cleaning wipes on all the case fan blades: the 140mm top rear fan, the 230mm top fan, and the 230mm bottom front fan. I had to take the front fan out to clean it, and broke its LED on/off switch in the process. Cooler Master no longer makes the Megaflow 230, and there are very few 230mm fans left on the market to replace it with. I found an old Megaflow 230 in my spare parts drawer and checked it out, but its LEDs were very dim, nearly dead, so I decided to leave it in the drawer. The LEDs in the front won't be missed too much, as they could barely be seen through the front grill. If Noctua made a 230mm model, I would replace it with that, but as it stands, I'll leave it as is.
I also washed the front grill in water, as compressed air alone couldn't get rid of all the cat fur. Even then, there are still traces of it.
For the top fan, the wire was a bit loose and this was causing vibrations. I used a (black) fan extension cable to run it through case holes and tied it to the case, creating some tension and eliminating the vibrations.
For the top rear fan, the wire used to be loose and could touch the blades of the fans on the CPU cooler. It was also in the way of the top PCIe slot. I used another black fan extension cable to run it more cleanly around some of the heatsinks.
The 4th case fan is a Noctua NF-A20 200mm fan attached to the side panel. Its blades were nearly pristine when I wiped them. It is only 2.5 years old, though, while some of the other fans may be as much as 9 years old.
As far as the PCIe slot arrangement goes, I decided to put the Aquantia NIC in the bottom PCIEX16_3 slot, which is actually an x4 slot. The RTX 3060 Ti GPU covers the next two slots up, an x1 and an x16. Above that, I have the FireWire x1 card, the Hauppauge x1 card, and finally the LSI x8 card.
Unfortunately, while the motherboard has 6 slots, one of the x1 slots is always covered by the GPU, unless one uses a single-slot GPU. This means I would not be able to add another two SATA ports via a PCIe card to fill the remaining 3.5in internal tray with two additional Samsung 860 1TB SSDs and expand my array from 8 to 10TB. It looks like the main ways to connect two additional SATA SSDs would be to use the internal USB 10 Gbps header with a SATA bridge, or to use the M2_2 header with a multi-port SATA adapter, as another poster suggested earlier in this thread. I read reviews of a JMicron model on Amazon, though, and sure enough, someone mentioned using it to add SATA HDDs to his ZFS array and having the M.2 adapter melt and destroy the M.2 port on the motherboard. As I mentioned before, JMicron is not a name I would trust. I wouldn't be using all 5 ports on that adapter, but even with two fast SSDs, the JMicron adapter might still overheat. Maybe someone else makes a better M.2-to-SATA adapter?
Besides, the M2_2 header is under the GPU, so I couldn't use it for SATA drives even with a good adapter; the adapter would have to go in M2_1 instead, and I would have to move the NVMe SSD to the M2_2 port under the GPU, which might not be optimal for heat, especially if I switch to a PCIe 4.0 SSD in the future.
Asus recommends putting the GPU in the top slot for performance reasons. However, I really don't want to run into the locked-card issue again, requiring removal of the massive Noctua heatsink to remove the GPU. So, I left the GPU in the middle slot, where it's much easier to access the unlock mechanism.
I still had to use a modular-to-4x-Molex cable from the PSU just to power the FireWire card via Molex. Apparently, SATA-to-Molex adapters exist to accomplish what's needed here, which is the reverse of the usual Molex-to-SATA:
https://www.amazon.com/12in-Molex-Power ... B00GK8SYCW .
Since there are 3 SATA power connectors still free, using this would allow removing one modular cable from the PSU and make a little more space in the case. The adapter cable is not black, though, unlike the Corsair modular cable, so I will not be purchasing it.
Regarding the third 4-pin CPU power connector on the motherboard, I was confused because Corsair only bundles two 8-pin power cables. It turns out those 8-pin cables can be split into two 4-pin halves as needed:
https://help.corsair.com/hc/en-us/artic ... n-CPU-port
So, I now have both the 8-pin and 4-pin power cables connected on the motherboard. Heavy overclocking, here I come.
Hardware-wise, the only thing still not working with the build at this point is the top (3.5in) slot of my Icy Dock MB971SP-B internal dock. This is not a new problem with this build, though. Technically, the issue is that the dock will intermittently power only the bottom slot, but not the top one. It's either a short or an issue with the power switch for the top slot.
There are still a few connectors on the motherboard that are not in use. I will have to think of ways to remedy this serious problem.
Those are the temperature sensor connector, the water pump header, the addressable Gen 2 RGB connectors, the USB 3.2 Gen 2 header, the Aura RGB headers, the Node connector, the TPM header, and the 8-pin power plug LED.
Good news: I haven't had any more issues with the GPU not powering up like I did in the other case. So, I shouldn't have to undo this build.
One issue that caused me to reopen the case is that I had mixed up the Aquantia NICs. Changing the DHCP entry on the Comcast router is a PITA, and changing the MAC address in the WOL software entries on my phone and PCs in multiple places is work, too. It was just easier to physically swap the NIC back to the original one. That's certainly one advantage of not relying on motherboard Ethernet, though of course you can technically set any MAC address you want in the driver settings and don't have to use the hardwired one.
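As an aside on why the MAC ends up stored in so many places: a WOL magic packet embeds the target MAC directly, as 6 bytes of 0xFF followed by the MAC repeated 16 times, so every WOL tool has to keep its own copy of it. Here's a minimal sketch in Python of what those WOL apps are doing, with a placeholder MAC (not my NIC's real address) and assuming the machine is on the local broadcast domain:

Code:

import socket

# A magic packet is 6 bytes of 0xFF followed by the target MAC repeated
# 16 times, which is why the MAC has to be updated in every WOL entry
# when the NIC changes.
def send_wol(mac, broadcast="255.255.255.255", port=9):
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(packet, (broadcast, port))
    sock.close()

send_wol("AA:BB:CC:DD:EE:FF")  # placeholder MAC, swap in the NIC's actual address

The packet is commonly sent as a UDP broadcast to port 9; the sleeping NIC just has to see it on the wire.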
I am really happy with the new build. It looks better on the inside than any computer I've ever built, considering there are so many devices and cables in it. And yes, I'll be posting some pictures.
Worst news of the (very long) night: when I removed the old motherboard, I decided to remove the NH-D14 Noctua cooler and also the CPU, so I could sell them separately. The cooler was very easy. But when I pulled the 5820k CPU out, one of the pins on the motherboard just disintegrated before my eyes. At least one that I saw, anyway. I have no idea if it's a pin that's actually used for something. I think maybe running 6 years at the max stable overclock I could find, which took literally weeks of running Prime95 to dial in, might have caused the motherboard pins to become brittle. I have not tried to put the CPU back in and run a POST test to see if the board still boots or not. Even if it does, it's hard to say if problems would show up later during use. So, I may not be able to get much for the used X99A Raider motherboard, as I'll have to sell it as-is. I should have just left the CPU on it and sold them as a bundle. Sigh.

I really hate LGA designs. Pin issues force replacement of the motherboard, which is a lot of work, whereas replacing a bad CPU with broken pins is a lot less work. Of course, the CPU may sometimes cost more than the motherboard. That's not usually been the case for the PCs I have built, except the ones that had dual CPUs (dual first-generation Pentium, dual Athlon MP). In those cases, the cost of the motherboard was roughly equal to the cost of the pair of chips, as I recall. With the advent of multi-core CPUs, I have stopped considering multi-CPU machines at home.