Discussion:
[gentoo-user] Computer case for new build
Dale
2023-09-18 05:20:02 UTC
Howdy,

This is a work in progress and may take some time, financially if
nothing else.  With hindsight, I wish I had done this before the price
of everything went up but some things are getting more reasonable. My
first task: a case.  At this point, I may build the new system in the new
case, or I might build it in my current case.  It depends
on which case I buy.  A cube shape wouldn't work for my main system.  It
would take up too much space; it would, however, make a great NAS box.  My
current case is a Cooler Master HAF-932 with those huge 200mm fans.  It
has a top fan and that thing removes a lot of warm air.  A top fan
really improves heat removal.  After all, heat naturally rises.  So any
case that has a top fan gets extra points with me. 

I've found a few cases that pique my interest depending on which way I go
with this.  One I found that has a lot of hard drive space and would
make a decent NAS box, the Fractal Design Node 804.  It's a cube shaped
thing but can hold a LOT of spinning rust. 10 drives plus I think space
for an SSD for the OS as well.  Ten 16 or 18TB drives is a lot of space,
even in a RAID setup.  I might add, the price ain't bad either,
cheaper than some full tower type cases.  It also has space for 10
fans.  That includes several top ones.  The downside: only micro ATX and
mini ITX mobos.  This is a serious downvote here.  I was hoping to turn
my current rig into a NAS.  The mobo and such parts.  This won't be an
option with this case.  Otherwise, it gives ideas on what I'm looking
for.  And not.  ;-)

Another find.  The Fractal Design Define 7 XL.  This is more of a tower
type shape like my current rig.  I think I read with extra trays, it can
hold up to 18 drives.  One could have a fancy RAID setup and still have
huge storage space with that.  I think it also has SSD spots for drives
that could hold the OS itself.  This one is quite pricey tho.  Still,
that is a LOT of drives. This one has space for 9 140mm fans including a
top one.  As far as drive capacity goes, this would make the best NAS box. 
It also takes a range of mobo types. 

Another find.  The Gamemax Master M905.  This one also holds about 10
hard drives, plus space for an SSD for the OS.  It is also a tower type. 
If my count is right, it has space for 4 120mm fans.  None at the top
tho.  One could cut a hole and add a fan, I guess.  This one also
takes several mobo types. 

Keep in mind, my current case has those huge 200mm fans.  Quiet as a
mouse but they move a lot of air.  I sit right next to this thing and I
can't really hear anything from it.  Despite having 9 hard drives in it,
it's still quiet.  Also, everything runs really cool.  Even the CPU runs
fairly cool.  When compiling and the CPU is at max for extended periods
of time, the most I've ever seen is like 125F, and that was when I forgot
and left the A/C off for a bit so the room was warm.  It usually runs at
most about 122F or so.  The speed control for the fans is really good. 
At idle as I type, 113F.  It doesn't vary a whole lot from idle to maxed
out.  The fan speeds are what mostly change. 

Some of you get to see a lot of different systems.  Some of you also are
familiar with many different distros as well, which is why I ask about
other distros sometimes.  Someone may even have one of the ones I listed
above and can share info first hand.  I figure someone out there may
know of a case that is something close to the ones listed above that
I've missed but could be a nice fit.  Those should give you an idea what
I'm looking for.  To be honest, I kinda like the Fractal Design Define 7
XL right now despite the higher cost.  I could make a NAS/backup box
with it and I doubt I'd run out of drive space even if I started using
RAID and mirrored everything, at a minimum.  9 pairs of say 18TB drives
would give around 145TBs of storage with a file system on it.  That's a
LOT of space.  Videos and such are getting quite large nowadays but I
doubt I'll ever go above a 40" TV so even 1080p videos don't have to
have a huge bit rate.  Someone with a huge 60" TV or larger might need
more.  I don't.  That amount of storage space would hold a LOT of
videos, YouTube and anything else I could find.  I could build my new
system in my current case.  After all, I like it just fine.  It's still
fully functional, plus I like how the USB ports are in front instead of
on top collecting dust, dirt and such.  However, if I pick the Define
7 XL, that would likely become a NAS/backup box.  The guts of the current
rig would go in it and the new rig would go in my old Cooler Master case. 

So, anyone think they know of a case that might beat those, especially
the Define 7 XL, and still have a price that is reasonable?  I also look at
cooling fans.  I really like how cool my current case stays.  Also, I'm
in the USA so it needs to be available here.  I mostly use eBay but
sometimes use Amazon.  I've also used Newegg, TigerDirect and a couple
other sites as well.  If you live outside the USA, brand and model will
do and I can look it up to see if it is available here or not. 

Oh, I don't care much for all the fancy lighting stuff.  I tend to
disable that pretty fast.  If you can find one that is rather plain,
bonus points.  I also air cool, no water cooling. 

Ideas?  Links? 

Dale

:-)  :-) 

P.S.  I found out how to make long runs to outbuildings too.  They make
a thing that converts Ethernet to fiber and vice versa.  Run the long
part over fiber, up to miles in some cases, then put a fiber-to-Ethernet
converter thingy on the other end.  From what I've read and seen on
YouTube, the Ethernet tends to be the limiting factor.  Neato.  :-D 
Frank Steinmetzger
2023-09-18 10:20:01 UTC
Post by Dale
Howdy,
[…]
I've found a few cases that pique my interest depending on which way I go
with this.  One I found that has a lot of hard drive space and would
make a decent NAS box, the Fractal Design Node 804.  It's a cube shaped
thing but can hold a LOT of spinning rust. 10 drives plus I think space
for an SSD for the OS as well.
These days you can always put your OS on an NVMe; faster access and two
fewer cables in the case (or one more slot for a data drive).
Post by Dale
[…]
The downside: only micro ATX and
mini ITX mobos.  This is a serious downvote here.
Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
you only need one SATA expander (with four or six on-board). Perhaps a fast
network card if one is needed, that makes two slots. You don’t get more RAM
slots with ATX, either. And, if nothing else, a smaller board means
(or can mean) lower power consumption and thus less heat.

Speaking of RAM; might I interest you in server-grade hardware? The reason
being that you can then use ECC memory, which is a nice perk for storage.¹
Also, the chance is higher to get sufficient SATA connectors on-board (maybe
in the form of an SFF connector, which is actually good, since it means
reduced “cable salad”).
AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board,
because they too support ECC. And DDR5 has basic (meaning 1 bit and
transparent to the OS) ECC built-in from the start.
Post by Dale
I was hoping to turn
my current rig into a NAS.  The mobo and such parts.  This won't be an
option with this case.  Otherwise, it gives ideas on what I'm looking
for.  And not.  ;-)
I was going to upgrade my 9-year-old Haswell system at some point to a new
Ryzen build. Have been looking around for parts and configs for perhaps two
years now but I can’t decide (perhaps some remember previous ramblings about
that). Now I’m actually considering buying a tiny Deskmini X300 after I found out
that it does support ACPI S3, but only with a specific UEFI version. No
10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)
Post by Dale
Another find.  The Fractal Design Define 7 XL.  This is more of a tower
type shape like my current rig.  I think I read with extra trays, it can
hold up to 18 drives.  One could have a fancy RAID setup and still have
huge storage space with that.  I think it also has SSD spots for drives
that could hold the OS itself.  This one is quite pricey tho.
With so many drives, you should also include a pricey power supply. And/or a
server board which supports staggered spin-up. Also, drives of the home NAS
category (and consumer drives anyways) are only certified for operation in
groups of up to 8-ish. Anything above and you sail in grey warranty waters.
Higher-tier drives are specced for the vibrations of so many drives (at
least I hope, because that’s what they™ tell us).
Post by Dale
To be honest, I kinda like the Fractal Design Define 7
XL right now despite the higher cost.  I could make a NAS/backup box
with it and I doubt I'd run out of drive space even if I started using
RAID and mirrored everything, at a minimum.
With 12 drives, I would go for parity RAID with two parity drives per six
drives, not for a mirror. That way you get 2/3 storage efficiency vs. 1/2
and more robustness; in parity, any two drives may fail, but in a cluster of
mirrors, only specific drives may fail (not two of the same mirror). If the
drives are huge, nine drives with three parity drives may be even better
(because rebuilds get scarier the bigger the drives get).
Post by Dale
9 pairs of say 18TB drives
would give around 145TBs of storage with a file system on it.
If you mirrored them all, you’d get 147 TiB. But as I said, use nine drives
with a 3-drive parity and you get 98 TiB per group. With two groups
(totalling 18 drives), you get 196 TiB. Wheeee!
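If you want to check my arithmetic, here it is as a tiny Python sketch
(just the numbers from above: drives are sold in decimal TB, capacities
show up as binary TiB, and filesystem overhead is ignored):

  TB, TiB = 10**12, 2**40

  def usable_tib(drives, redundancy, size_tb):
      # usable capacity of one group = data drives x drive size
      return (drives - redundancy) * size_tb * TB / TiB

  print(usable_tib(18, 9, 18))     # 18 drives as 9 mirrored pairs: ~147 TiB
  print(usable_tib(9, 3, 18))      # 9-drive group with 3 parity: ~98 TiB
  print(2 * usable_tib(9, 3, 18))  # two such groups (18 drives): ~196 TiB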


¹ There was once a time when ECC was supported by all boards and CPUs. But
then someone invented market segmentation to increase profits through
upselling.
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Skype jokes are oftentimes not understood, even when they’re repeated.
Rich Freeman
2023-09-18 11:20:01 UTC
Post by Dale
[…]
The downside: only micro ATX and
mini ITX mobos. This is a serious downvote here.
Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
you only need one SATA expander (with four or six on-board). Perhaps a fast
network card if one is needed, that makes two slots.
Tend to agree. The other factor here is that desktop-oriented CPUs
tend to not have a large number of PCIe lanes free for expansion
slots, especially if you want 1-2 NVMe slots. (You also have to watch
out as the lanes for those can be shared with some of the expansion
slots so you can't use both.)

If you want to consider a 10GbE+ card I'd definitely get something
with integrated graphics, because a NIC is going to need a 4-8x port
most likely (maybe there are expensive ones that use later generations
and fewer lanes). On most motherboards you may only get one slot with
that kind of bandwidth.
Speaking of RAM; might I interest you in server-grade hardware? The reason
being that you can then use ECC memory, which is a nice perk for storage.
That and way more PCIe lanes. That said, it seems super-expensive,
both in terms of dollars and power use. Is there any entry point
into server-grade hardware that is reasonably priced, and which can
idle at something reasonable (certainly under 50W)?
Post by Dale
I was hoping to turn
my current rig into a NAS. The mobo and such parts. This won't be an
option with this case. Otherwise, it gives ideas on what I'm looking
for. And not. ;-)
I was going to upgrade my 9-year-old Haswell system at some point to a new
Ryzen build. Have been looking around for parts and configs for perhaps two
years now but I can’t decide (perhaps some remember previous ramblings about
that).
The latest zen generation is VERY nice, but also pretty darn
expensive. Going back to zen3 might get you more for the money,
depending on how big you're scaling up. A big part of the cost of
zen4 is the motherboard, so if you're building something very high end
where the CPU+RAM dominates, then zen4 may be a better buy. If you
just want a low-core system then you're paying a lot just to get
started.

RE NAS: I used to build big boxes with lots of drives on ZFS. These
days I'm using distributed filesystems (I've migrated from MooseFS to
Ceph, though both have their advantages). The advantage of
distributed filesystems is that you can build them out of a bunch of
cheap boxes, vs trying to find one box that you can cram a dozen hard
drives into. They're just much easier to expand. Plus you get
host-level redundancy. Ceph is better for HA - I can literally reboot
every host in my network (one at a time) and all my essential services
stay running. MooseFS performs much better at small scale on hard
drives, but depends on a master node for the FOSS version, so if that
goes down the cluster is down (the locking behavior also seems to have
issues - I've had corruption issues with SQLite and such with it).

When you start getting up to a dozen drives the cost of getting them
to all work on a single host starts going up. You need big cases,
expansion cards, etc. Then when something breaks you need to find a
replacement quickly from a limited pool of options. If I lose a node
on my Rook cluster I can just go to newegg and look at $150 used SFF
PCs, then install the OS and join the cluster and edit a few lines of
YAML and the disks are getting formatted...
¹ There was once a time when ECC was supported by all boards and CPUs. But
then someone invented market segmentation to increase profits through
upselling.
Yeah, zen1 used to support ECC on most motherboards. Then later
motherboards dropped support. Definitely a case of market
segmentation.

This is part of why I like storage implementations that have more
robustness built into the software. Granted, it is still only as good
as your clients, but with distributed storage I really don't want to
be paying for ECC on all of my nodes. If the client calculates a
checksum and it remains independent of the data, then any RAM
corruption should be detectable as a mismatch (that of course assumes
the checksum is preserved and not re-calculated at any point).
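To illustrate the idea (a toy sketch in Python, not any particular
storage system's API):

  import hashlib

  def put(store, key, data):
      # the client computes the checksum once; it is kept separate
      # from the data and never re-derived further down the stack
      store[key] = (data, hashlib.sha256(data).hexdigest())

  def get(store, key):
      data, digest = store[key]
      if hashlib.sha256(data).hexdigest() != digest:
          # a bit flipped in RAM or on disk since put() shows up here
          raise IOError("checksum mismatch")
      return data

The moment some layer "helpfully" recomputes the checksum from the data
it happens to hold, a flipped bit gets laundered into a valid-looking
object, which is exactly the assumption above.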
--
Rich
Frank Steinmetzger
2023-09-18 13:10:01 UTC
Post by Rich Freeman
Post by Frank Steinmetzger
Post by Dale
[…]
The downside: only micro ATX and
mini ITX mobos. This is a serious downvote here.
Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
you only need one SATA expander (with four or six on-board). Perhaps a fast
network card if one is needed, that makes two slots.
Tend to agree. The other factor here is that desktop-oriented CPUs
tend to not have a large number of PCIe lanes free for expansion
slots, especially if you want 1-2 NVMe slots. (You also have to watch
out as the lanes for those can be shared with some of the expansion
slots so you can't use both.)
If you want to consider a 10GbE+ card I'd definitely get something
with integrated graphics,
That is a recommendation in any case. If you are a gamer, you have a
fallback in case the GPU kicks the bucket. And if not, your power bill goes
way down.
Post by Rich Freeman
because a NIC is going to need a 4-8x port
most likely
Really? PCIe 3.0 has 1 GB/s/lane, that is 8 Gbps/lane, so almost as much as
10 GbE. OTOH, 10 GbE is a major power sink. Granted, 1 GbE is not much when
you’re dealing with numerous TB. And then there is network over Thunderbolt,
of which I only recently learned. But this is probably very restricted in
length. Which will also be the case for 10 GbE, so probably no options for
the outhouse. :D
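To put rough numbers on it (a Python scribble; the per-lane rates are
the usual approximate usable figures after encoding overhead):

  import math

  per_lane_gbs = {"PCIe 2.0": 0.5, "PCIe 3.0": 0.985, "PCIe 4.0": 1.97}
  need_gbs = 10 / 8  # 10 GbE = 10 Gbps, i.e. 1.25 GB/s

  for gen, rate in per_lane_gbs.items():
      print(gen, "needs", math.ceil(need_gbs / rate), "lane(s) for 10 GbE")

So a single lane only suffices from PCIe 4.0 on, which would explain the
×4/×8 connectors on the cards that actually exist.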
Post by Rich Freeman
Post by Frank Steinmetzger
Speaking of RAM; might I interest you in server-grade hardware? The reason
being that you can then use ECC memory, which is a nice perk for storage.
That and way more PCIe lanes. That said, it seems super-expensive,
both in terms of dollars and power use. Is there any entry point
into server-grade hardware that is reasonably priced, and which can
idle at something reasonable (certainly under 50W)?
I have a four-bay NAS with a server board (ASRock Rack E3C224D2I), actually my
last surviving Gentoo system. ;-) With an IPMI chip (which alone takes several
watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W
from the plug at idle — that is after I enabled all powersaving items in
powertop. Without them, it is around 10 W more. It has two gigabit ports
(plus IPMI port) and a 300 W 80+ gold PSU.
Post by Rich Freeman
Post by Frank Steinmetzger
I was going to upgrade my 9-year-old Haswell system at some point to a new
Ryzen build. Have been looking around for parts and configs for perhaps two
years now but I can’t decide (perhaps some remember previous ramblings about
that).
The latest zen generation is VERY nice, but also pretty darn
expensive. Going back to zen3 might get you more for the money,
depending on how big you're scaling up.
I’ve been looking at Zen 3 the whole time, namely the 5700G APU. 5 times the
performance of my i5, for less power, and good graphics performance for the
occasional game. I’m a bit paranoid re. Zen 4’s inclusion of Microsoft
Pluton (“Chip-to-Cloud security”) and Zen 4 in general has higher idle
consumption. But now that Phoenix, the Zen 4 successor to the 5700G, is
about to become available, I am again hesitant to pull the trigger, waiting
for the price tag.
Post by Rich Freeman
A big part of the cost of
zen4 is the motherboard, so if you're building something very high end
where the CPU+RAM dominates, then zen4 may be a better buy.
I’m fine with middle-class. In fact I always thought i7s were overpriced
compared to i5s. The plus in performance of top-tier parts is usually bought
with disproportionately high power consumption (meaning heat and noise).
Post by Rich Freeman
If you just want a low-core system then you're paying a lot just to get
started.
I want to get the best bang within my constraints, meaning the 5700G (
8 cores). The 5600G (6 cores) is much cheaper, but I want to get the best
graphics I can get in an APU. And I am always irked by having 6 cores (12
threads), because it’s not a power of 2, so percentages in load graphs will
look skewed. :D
Post by Rich Freeman
The advantage of
distributed filesystems is that you can build them out of a bunch of
cheap boxes […]
When you start getting up to a dozen drives the cost of getting them
to all work on a single host starts going up. You need big cases,
expansion cards, etc. Then when something breaks you need to find a
replacement quickly from a limited pool of options. If I lose a node
on my Rook cluster I can just go to newegg and look at $150 used SFF
PCs, then install the OS and join the cluster and edit a few lines of
YAML and the disks are getting formatted...
For simple media storage, I personally would find this too cumbersome to
manage. Especially if you stick to Gentoo and don’t have a homogeneous
device pool (not to mention compile times). I’d choose organisational
simplicity over hardware availability. (My NAS isn’t running most of the
time, mostly due to the power bill, but also to keep the hardware alive
for longer.)
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Talents find solutions, geniuses discover problems.
Rich Freeman
2023-09-18 19:00:01 UTC
Post by Frank Steinmetzger
Post by Rich Freeman
because a NIC is going to need a 4-8x port
most likely
Really? PCIe 3.0 has 1 GB/s/lane, that is 8 Gbps/lane, so almost as much as
10 GbE.
I can't find any 10GbE NICs that use a 1x slot - if you can I'll be
impressed. In theory somebody could probably make one that uses PCIe
v4/5 or so, but I'm not seeing one today.

If it needs more than a 1x slot, then it is all moot after that, as
most consumer motherboards tend to have 1x slots, a 16x slot, and
MAYBE a 4x slot in a 16x physical form. Oh, and good luck finding
boards with an open end on the slot, even if there would be room to
let a card dangle.

My point with micro ATX was that with consumer CPUs having so few
lanes available having room for more slots wouldn't help, as there
wouldn't be lanes available to connect to them, unless you added a
switch. That's something else which is really rare on motherboards.
I don't get why they charge $250 for an AM5 motherboard, and maybe
even have a switch on the X series ones, but they can't be bothered to
give you larger slots. I can't imagine that all the lanes are busy
all the time, so a switch would probably help quite a bit.
Post by Frank Steinmetzger
this is probably very restricted in
length. Which will also be the case for 10 GbE, so probably no options for
the outhouse. :D
With an SFP+ port you can just use fiber and go considerable
distances. That's assuming you're running network to your outhouse,
and not bothering to put a switch in there (which would be more
logical).
Post by Frank Steinmetzger
I have a four-bay NAS with a server board (ASRock Rack E3C224D2I), actually my
last surviving Gentoo system. ;-) With an IPMI chip (which alone takes several
watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W
from the plug at idle — that is after I enabled all powersaving items in
powertop. Without them, it is around 10 W more. It has two gigabit ports
(plus IPMI port) and a 300 W 80+ gold PSU.
That's an ITX system though, and a very old one at that. Not sure how
useful more PCIe lanes are in a form factor like that.
Post by Frank Steinmetzger
Post by Rich Freeman
The advantage of
distributed filesystems is that you can build them out of a bunch of
cheap boxes […]
For simple media storage, I personally would find this too cumbersome to
manage. Especially if you stick to Gentoo and don’t have a homogeneous
device pool (not to mention compile times).
I don't generally use Gentoo just to run containers. On a k8s box the
box itself basically does nothing but run k8s. I probably only run
about 5 commands to provision one from bare metal. :)
--
Rich
Frank Steinmetzger
2023-09-18 19:30:01 UTC
Post by Rich Freeman
Post by Frank Steinmetzger
I have a four-bay NAS with a server board (ASRock Rack E3C224D2I), actually my
last surviving Gentoo system. ;-) With an IPMI chip (which alone takes several
watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W
from the plug at idle — that is after I enabled all powersaving items in
powertop. Without them, it is around 10 W more. It has two gigabit ports
(plus IPMI port) and a 300 W 80+ gold PSU.
That's an ITX system though, and a very old one at that.
Well, you asked for entry-point server hardware with low idle consumption.
;-)

I built it in November 2016. Even then it was old componentry, but I wanted
to save €€€ and it was enough for my needs. I installed a Celeron G1840 for
33 € because I thought it would be enough. I tested its AES performance
beforehand (because it didn’t have AES-NI) and with 155 MB/s it was enough
to saturate GbE. But since I ran ZFS on LUKS at the time (still do, until I
change the setup for more capacity), I ran into a bottleneck during scrubs.
So after a year, I paid over 100 € for the i3 which I should have bought
from the get-go. :-/
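(For anyone wanting to check their own box before buying: cryptsetup
benchmark prints per-cipher throughput, including the aes-xts figures
that matter for LUKS.)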
Post by Rich Freeman
Not sure how
useful more PCIe lanes are in a form factor like that.
Modern boards might come with NVMe slots that can be re-purposed for
external cards.
--
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The more cheese, the more holes. The more holes, the less cheese.
Ergo: the more cheese, the less cheese!
Wols Lists
2023-09-19 08:20:01 UTC
Post by Rich Freeman
This is part of why I like storage implementations that have more
robustness built into the software. Granted, it is still only as good
as your clients, but with distributed storage I really don't want to
be paying for ECC on all of my nodes. If the client calculates a
checksum and it remains independent of the data, then any RAM
corruption should be detectable as a mismatch (that of course assumes
the checksum is preserved and not re-calculated at any point).
Which is why I run raid-5 over dm-integrity. I'm not sure it's that
stable :-( :-( but it means any disk corruption will get picked up at
the integrity level, and raid-5 will just get a read error which will
trigger a parity recalc without data loss.
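If anyone wants to try it, the layering is simple enough. A minimal
sketch with made-up device names (each disk gets a dm-integrity mapping
first, then the md array is built on top of the mappings):

  integritysetup format /dev/sdX1        # destroys existing data!
  integritysetup open /dev/sdX1 int-X    # repeat for each disk
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/mapper/int-X /dev/mapper/int-Y /dev/mapper/int-Z

The default checksum is crc32c, and a failed check surfaces as a read
error on the mapping, which is exactly what md needs to trigger the
parity recalc.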

Cheers,
Wol
Dale
2023-09-18 19:30:01 UTC
Post by Frank Steinmetzger
Post by Dale
Howdy,
[…]
I've found a few cases that pique my interest depending on which way I go
with this.  One I found that has a lot of hard drive space and would
make a decent NAS box, the Fractal Design Node 804.  It's a cube shaped
thing but can hold a LOT of spinning rust. 10 drives plus I think space
for an SSD for the OS as well.
These days you can always put your OS on an NVMe; faster access and two
fewer cables in the case (or one more slot for a data drive).
Well, I already have one SSD, sitting in the box in my safe.  I was
going to move the OS on my current rig to it but just haven't had the time
and energy to fool with it.  I plan to move the current mobo and such to
the NAS box.  It doesn't have anything but the SSD as an option.  The new
build will likely have an NVMe thingy tho.  That would be a good idea
for it.  Then put the SSD in what is my current rig once it's moved to the
NAS box.  Hmmm, there's a thought. 
Post by Frank Steinmetzger
Post by Dale
[…]
The downside: only micro ATX and
mini ITX mobos.  This is a serious downvote here.
Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
you only need one SATA expander (with four or six on-board). Perhaps a fast
network card if one is needed, that makes two slots. You don’t get more RAM
slots with ATX, either. And, if nothing else, a smaller board means
(or can mean) lower power consumption and thus less heat.
Speaking of RAM; might I interest you in server-grade hardware? The reason
being that you can then use ECC memory, which is a nice perk for storage.¹
Also, the chance is higher to get sufficient SATA connectors on-board (maybe
in the form of an SFF connector, which is actually good, since it means
reduced “cable salad”).
AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board,
because they too support ECC. And DDR5 has basic (meaning 1 bit and
transparent to the OS) ECC built-in from the start.
I tend to need quite a few PCIe slots.  I like to have my own video
card.  I never liked the built in ones.  I also have never had a
built-in network port work right either.  Every one of them always
had problems if they worked at all.  PCI(e) network cards have always
worked great.  I also need PCIe slots for SATA expander cards.  If I use
the Define case, I'd like to spread that across at least two cards,
maybe three.  So, network, video and at least a couple SATA cards,
adding up fast.  Sometimes, I wouldn't mind having the larger ATX with
extra PCIe slots.  Thought about having SAS cards and cables that
convert to SATA.  I think they do that.  That may make it just one
card.  I dunno.  I haven't dug deep into that yet.  Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure mobo will pick
memory for me since usually only one or two will work anyway. 
Post by Frank Steinmetzger
Post by Dale
I was hoping to turn
my current rig into a NAS.  The mobo and such parts.  This won't be an
option with this case.  Otherwise, it gives ideas on what I'm looking
for.  And not.  ;-)
I was going to upgrade my 9-year-old Haswell system at some point to a new
Ryzen build. Have been looking around for parts and configs for perhaps two
years now but I can’t decide (perhaps some remember previous ramblings about
that). Now I’m actually considering buying a tiny Deskmini X300 after I found out
that it does support ACPI S3, but only with a specific UEFI version. No
10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)
I thought about using a Raspberry Pi for a NAS box.  Just build more
than one of them.  Thing is, finding the parts for it is almost
impossible right now.  They kinda went away a couple years ago when
things got crazy. 
Post by Frank Steinmetzger
Post by Dale
Another find.  The Fractal Design Define 7 XL.  This is more of a tower
type shape like my current rig.  I think I read with extra trays, it can
hold up to 18 drives.  One could have a fancy RAID setup and still have
huge storage space with that.  I think it also has SSD spots for drives
that could hold the OS itself.  This one is quite pricey tho.
With so many drives, you should also include a pricey power supply. And/or a
server board which supports staggered spin-up. Also, drives of the home NAS
category (and consumer drives anyways) are only certified for operation in
groups of up to 8-ish. Anything above and you sail in grey warranty waters.
Higher-tier drives are specced for the vibrations of so many drives (at
least I hope, because that’s what they™ tell us).
I usually get a larger than needed power supply anyway.  I got a 650 or
700 watt in my current rig.  According to the UPS, I run less than 300
watts even when compiling, and that includes the puter, monitor, router,
modem and a couple power supplies for external hard drives.  I tend to
plug the rest into regular outlets, not the battery backup ones.  I figure
the power for the puter itself is under 200 watts.  Still, I get extra
capacity for spinning those things up at power on. 

I was looking at Corsair the other day.  I've also had good luck with
Thermaltake and EVGA brands, certain models anyway.  Some models of
those are the cheapos.  One has to avoid those.  That's a future thing
tho. 
Post by Frank Steinmetzger
Post by Dale
To be honest, I kinda like the Fractal Design Define 7
XL right now despite the higher cost.  I could make a NAS/backup box
with it and I doubt I'd run out of drive space even if I started using
RAID and mirrored everything, at a minimum.
With 12 drives, I would go for parity RAID with two parity drives per six
drives, not for a mirror. That way you get 2/3 storage efficiency vs. 1/2
and more robustness; in parity, any two drives may fail, but in a cluster of
mirrors, only specific drives may fail (not two of the same mirror). If the
drives are huge, nine drives with three parity drives may be even better
(because rebuilds get scarier the bigger the drives get).
Right now, my NAS box works more like a backup system.  I have it just
in case something happens to my main rig.  I call it a NAS because that
is the closest term that kinda fits.  At some point tho, I may have an
actual NAS box being used as a literal NAS box.  I likely really do need
to use RAID of some type but I tend to make backups as my fall back. 
Plus the cost of the drives is kinda steep.  Have to at least double
everything, maybe more.
Post by Frank Steinmetzger
Post by Dale
9 pairs of say 18TB drives
would give around 145TBs of storage with a file system on it.
If you mirrored them all, you’d get 147 TiB. But as I said, use nine drives
with a 3-drive parity and you get 98 TiB per group. With two groups
(totalling 18 drives), you get 196 TiB. Wheeee!
¹ There was once a time when ECC was supported by all boards and CPUs. But
then someone invented market segmentation to increase profits through
upselling.
ECC has been around long enough that the price may not increase by much,
if at all.  Still, that's the motherboard, which will be the next thing. 
Case first. 

Rich, I read yours but am waiting to see Frank's reply.  Good questions and
ideas tho.  ;-)

Since no one mentioned a better case, that Define thing may end up being
it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
I bought my current case, which has space for five 3.5" and six 5 1/4"
drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
ones have been full for a while and the 5 1/4" are about full too.  I
got one that I could remove.  It's a 750GB and I back up my email and run
my chroot from it.  Still, while the Gamemax would work for now, as a
backup/NAS box, I would likely outgrow it too.  Then I'd need yet
another case.  Dang this stuff gets pricey.  O_O 

Dale

:-)  :-) 
Frank Steinmetzger
2023-09-18 20:10:01 UTC
Post by Frank Steinmetzger
Post by Dale
[…]
The downside: only micro ATX and
mini ITX mobos.  This is a serious downvote here.
Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
you only need one SATA expander (with four or six on-board). Perhaps a fast
network card if one is needed, that makes two slots. You don’t get more RAM
slots with ATX, either. And, if nothing else, a smaller board means
(or can mean) lower power consumption and thus less heat.
Speaking of RAM; might I interest you in server-grade hardware? The reason
being that you can then use ECC memory, which is a nice perk for storage.¹
Also, the chance is higher to get sufficient SATA connectors on-board (maybe
in the form of an SFF connector, which is actually good, since it means
reduced “cable salad”).
AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board,
because they too support ECC. And DDR5 has basic (meaning 1 bit and
transparent to the OS) ECC built-in from the start.
I tend to need quite a few PCIe slots.  I like to have my own video
card.  I never liked the built in ones.
You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may
have asked that before).

I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. :) Both Intel and AMD work just fine with the kernel
drivers.
I also have never had a built-in network port work right either. 
Every one of them always had problems if they worked at all.
I faintly remember a thread about that from long ago. But the same thought
applies: in case you buy a new board, give it a try. Keep away from Intel
I225-V though, that 2.5 GbE chip has a design flaw but manufacturers still
use it.
I also need PCIe slots for SATA expander cards.
That’s the use case I mostly thought of. Irritatingly, I just looked at my
price comparison site for SATA expansion cards and all 8×SATA cards are PCIe
2.0 with either two or even just one lane. -_- So not even PCIe 3.0×1, which
is the same speed as 2.0×2 but would fit in a ×1 slot which many boards
have in abundance.

2.0×2 is about 1 GB/s. Divided by 8 drives gives you 125 MB/s/drive.
If I use
the Define case, I'd like to spread that across at least two cards,
maybe three.  So, network, video and at least a couple SATA cards,
adding up fast.  Sometimes, I wouldn't mind having the larger ATX with
extra PCIe slots.  Thought about having SAS cards and cables that
convert to SATA.  I think they do that.  That may make it just one
card.  I dunno.  I haven't dug deep into that yet.
After the disappointment with the SATA expanders I looked at SAS cards.
They are well connected on the PCIe side (2.0×8 or 3.0×8) and they are
compatible with SATA drives. I found an Intel SAS card with four SFF
connectors (meaning 16 drives!) for a little over 100 €. It’s called
RMSP3JD160J. I don’t know why it is so cheap, though. Because the
second-cheapest competitor is already at 190 €.
Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure mobo will pick
memory for me since usually only one or two will work anyway. 
One or two what?
Post by Frank Steinmetzger
I was going to upgrade my 9-year-old Haswell system at some point to a new
Ryzen build. Have been looking around for parts and configs for perhaps two
years now but I can’t decide (perhaps some remember previous ramblings about
that). Now I’m actually considering buying a tiny Deskmini X300 after I found out
that it does support ACPI S3, but only with a specific UEFI version. No
10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)
I thought about using a Raspberry Pi for a NAS box.  Just build more
than one of them.  Thing is, finding the parts for it is almost
impossible right now.  They kinda went away a couple years ago when
things got crazy. 
I was talking main PC use case, not NAS. :)
The minimalist form factor doesn’t really impede me. I don’t have any HDDs
in my PC anymore (too noisy), so why keep space for it. And while I do like
to game a little bit, I find a full GPU too expensive and hungry, because it
will be bored most of the time.

The rest can be done with USB, which is the only thing a compact case often
lacks in numbers.
Since no one mentioned a better case, that Define thing may end up being
it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
I bought my current case, which has space for five 3.5" and six 5 1/4"
drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
ones have been full for a while and the 5 1/4" are about full too.
Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into
three 5¼″ slots.
--
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If you switch off the lights fast enough,
you can see what the darkness looks like.
Dale
2023-09-18 23:50:01 UTC
Post by Dale
Post by Dale
[…]
The downside: only micro ATX and
mini ITX mobos.  This is a serious downvote here.
Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
you only need one SATA expander (with four or six on-board). Perhaps a fast
network card if one is needed, that makes two slots. You don’t get more RAM
slots with ATX, either. And, if nothing else, a smaller board means
(or can mean) lower power consumption and thus less heat.
Speaking of RAM; might I interest you in server-grade hardware? The reason
being that you can then use ECC memory, which is a nice perk for storage.¹
Also, the chance is higher to get sufficient SATA connectors on-board (maybe
in the form of an SFF connector, which is actually good, since it means
reduced “cable salad”).
AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board,
because they too support ECC. And DDR5 has basic (meaning 1 bit and
transparent to the OS) ECC built-in from the start.
I tend to need quite a few PCIe slots.  I like to have my own video
card.  I never liked the built in ones.
You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may
have asked that before).
I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. :) Both Intel and AMD work just fine with the kernel
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not an option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.  I figure it is blanking during the switch. 
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  ;-)
Post by Dale
I also have never had a built-in network port work right either. 
Every one of them always had problems if they worked at all.
I faintly remember a thread about that from long ago. But the same thought
applies: in case you buy a new board, give it a try. Keep away from Intel
I225-V though, that 2.5 GbE chip has a design flaw but manufacturers still
use it.
Post by Dale
I also need PCIe slots for SATA expander cards.
That’s the use case I mostly thought of. Irritatingly, I just looked at my
price comparison site for SATA expansion cards and all 8×SATA cards are PCIe
2.0 with either two or even just one lane. -_- So not even PCIe 3.0×1, which
is the same speed as 2.0×2 but would fit in a ×1 slot which many boards
have in abundance.
2.0×2 is about 1 GB/s. Divided by 8 drives gives you 125 MB/s/drive.
There's always going to be a bottleneck somewhere, I just try to
minimize it if I can.  Plus, with two cards, if one fails, at least I have
a 2nd to play with.  I might be able to get one VG at a time up and running.
Post by Dale
If I use
the Define case, I'd like to spread that across at least two cards,
maybe three.  So, network, video and at least a couple SATA cards,
adding up fast.  Sometimes, I wouldn't mind having the larger ATX with
extra PCIe slots.  Thought about having SAS cards and cables that
convert to SATA.  I think they do that.  That may make it just one
card.  I dunno.  I haven't dug deep into that yet.
After the disappointment with the SATA expanders I looked at SAS cards.
They are well connected on the PCIe side (2.0×8 or 3.0×8) and they are
compatible with SATA drives. I found an Intel SAS card with four SFF
connectors (meaning 16 drives!) for a little over 100 €. It’s called
RMSP3JD160J. I don’t know why it is so cheap, though. Because the
second-cheapest competitor is already at 190 €.
I did a quick search and only found one, and it is listed as parts or
not working.  Still, it could lead me to more options tho.  It would likely be
a good idea to use SAS.  Plus, if I start buying SAS drives, I'm ready. 
I sometimes find a good deal on a SAS drive. 
Post by Dale
Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure mobo will pick
memory for me since usually only one or two will work anyway. 
One or two what?
One or two types of memory.  Usually, plain or ECC.  Mobos are usually
pretty picky about their memory. 
Post by Dale
I was going to upgrade my 9-year-old Haswell system at some point to a new
Ryzen build. Have been looking around for parts and configs for perhaps two
years now but I can’t decide (perhaps some remember previous ramblings about
that). Now I’m actually considering buying a tiny Deskmini X300 after I found out
that it does support ACPI S3, but only with a specific UEFI version. No
10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)
I thought about using a Raspberry Pi for a NAS box.  Just build more
than one of them.  Thing is, finding the parts for it is almost
impossible right now.  They kinda went away a couple years ago when
things got crazy. 
I was talking main PC use case, not NAS. :)
The minimalist form factor doesn’t really impede me. I don’t have any HDDs
in my PC anymore (too noisy), so why keep space for it. And while I do like
to game a little bit, I find a full GPU too expensive and hungry, because it
will be bored most of the time.
The rest can be done with USB, which is the only thing a compact case often
lacks in numbers.
Post by Dale
Since no one mentioned a better case, that Define thing may end up being
it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
I bought my current case, which has space for five 3.5" and six 5 1/4"
drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
ones have been full for a while and the 5 1/4" are about full too.
Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into
three 5¼″ slots.
I've been looking into those.  I need to take the side off my case and
make some measurements to see if one or more of them will fit.  Right
now, I just got some 5 1/4" to 3.5" adapters to put in HDDs.  I do have
two ODDs installed tho.  One DVD burner and one Blu-ray burner.  I was
long ago thinking of backing up to Blu-ray.  Then I figured out how many
discs it would take, and the time too.  That idea failed after I thought
about it.  Still, I found a good deal and bought a Blu-ray burner anyway. 
I've made a few discs for neighbors with stuff on them.

Most of this will likely be hammered out in future threads.  As it is,
going to save up for the case.  That's a chunk of change.  I think it
will be a nice case tho. 

Dale

:-)  :-) 
Frank Steinmetzger
2023-09-19 00:10:02 UTC
Post by Dale
Post by Frank Steinmetzger
I tend to need quite a few PCIe slots.  I like to have my own video
card.  I never liked the built in ones.
You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may
have asked that before).
I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. :) Both Intel and AMD work just fine with the kernel
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not an option.  I think I'm on my
third in this rig.
I also need multiple outputs, two at least.
That is not a problem with iGPUs. The only thing to consider is the type of
video connectors on the board. Most have two classical ones, some three,
divided among HDMI and DP. And the fancy ones use USB-C with DisplayPort
alternative mode. Also, dGPUs draw a lot more when using two displays.
Post by Dale
One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.
In case you use Grub, it has an option to keep the UEFI video mode.
So there would be no switching if UEFI already starts with the proper
resolution.
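If you use the grub-mkconfig machinery, it is a one-line change in
/etc/default/grub (a sketch; on BIOS boots it keeps whatever mode
GRUB_GFXMODE selected):

  GRUB_GFXPAYLOAD_LINUX=keep

then regenerate with grub-mkconfig -o /boot/grub/grub.cfg.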
Post by Dale
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  ;-)
Yeah, and it’s neat if there is no flickering or blanking. So modern and
clean.
Post by Dale
Post by Frank Steinmetzger
Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure mobo will pick
memory for me since usually only one or two will work anyway. 
One or two what?
One or two types of memory.  Usually, plain or ECC.  Mobos are usually
pretty picky about their memory. 
Hm… while I haven’t used that many different components in my life, so far
I have not had a system not accept any RAM. Just stick to the big names, I
guess.
Post by Dale
Post by Frank Steinmetzger
Since no one mentioned a better case, that Define thing may end up being
it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
I bought my current case, which has space for five 3.5" and six 5 1/4"
drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
ones have been full for a while and the 5 1/4" are about full too.
Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into
------------------------------------------------^^^^^

That should have been 5×3.5″. Too many threes and fives floatin’ around in
my head and it’s getting late.
--
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The majority of people have an above-average number of legs.
Dale
2023-09-19 06:10:01 UTC
Post by Frank Steinmetzger
Post by Dale
Post by Dale
I tend to need quite a few PCIe slots.  I like to have my own video
card.  I never liked the built in ones.
You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may
have asked that before).
I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. :) Both Intel and AMD work just fine with the kernel
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not an option.  I think I'm on my
third in this rig.
I also need multiple outputs, two at least.
That is not a problem with iGPUs. The only thing to consider is the type of
video connectors on the board. Most have two classical ones, some three,
divided among HDMI and DP. And the fancy ones use USB-C with DisplayPort
alternative mode. Also, dGPUs draw a lot more when using two displays.
They have added a lot of stuff to mobos since I bought one about a
decade ago.  Maybe things have improved.  I just like PCIe slots and
cards.  Gives me more options.  Given how things have changed tho, I may
have to give in on some things.  I just like my mobos to be like Linux. 
Have something do one thing and do it well.  When needed, change that
thing.  ;-) 
Post by Frank Steinmetzger
Post by Dale
One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.
In case you use Grub, it has an option to keep the UEFI video mode.
So there would be no switching if UEFI already starts with the proper
resolution.
That rig is old.  Maybe 10 or 15 years old.  No UEFI on it.  Does use
grub tho.  I duckduckgo'd it and changed some settings but last time I
booted, it did all that blinky, blank stuff.  Sometimes, I wonder if it
is hung up or crashed.  Then it pops up again and lets me know it is
still booting.  Eventually, I'll remove the monitor completely.  Then it
either boots up or it doesn't.  I just ssh in, decrypt the drives, then
mount from my main rig and start my backups.  I might add, with this new
LVM setup, the backups started at about the end of a previous
thread last Wednesday I think.  It's still copying data to the new
backup.  It's up to the files starting with "M".  The ones starting with
"The" are pretty big.  It's gonna take a while.  Poor drives.  o_O
Post by Frank Steinmetzger
Post by Dale
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  ;-)
Yeah, and it’s neat if there is no flickering or blanking. So modern and
clean.
Post by Dale
Post by Dale
Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure mobo will pick
memory for me since usually only one or two will work anyway. 
One or two what?
One or two types of memory.  Usually, plain or ECC.  Mobos are usually
pretty picky about their memory. 
Hm… while I haven’t used that many different components in my life, so far
I have not had a system not accept any RAM. Just stick to the big names, I
guess.
I think one of my rigs uses DDR, I think my main rig is DDR3.  I noticed
they are up to DDR5 now.  What I meant was if a mobo requires DDR4, that
is usually all it will take.  Nothing else will work.  Whatever the mobo
requires is what you use, just pick a good brand as you say. 
Post by Frank Steinmetzger
Post by Dale
Post by Dale
Since no one mentioned a better case, that Define thing may end up being
it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
I bought my current case, which has space for five 3.5" and six 5 1/4"
drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
ones have been full for a while and the 5 1/4" are about full too.
Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into
------------------------------------------------^^^^^
That should have been 5×3.5″. Too many threes and fives floatin’ around in
my head and it’s getting late.
Honestly, I read it the way you meant it.  lol  I've got about three
different kinds in my wish list.  Eventually, I'll take the side off and
see which one will work.  I also found one that I think can be used as an
external case.  It has a fan, power plug and eSATA connectors.  I think
it holds five drives.  If I get that, I just may scrap the setup I
currently have, have one large LVM setup, and back everything up to one
set of drives.  It will fit in my safe too. 

My brain is starting to hurt.  ROFL 

Dale

:-)  :-) 

P.S.  I just realized, I have an old Gigabyte mobo on the shelf.  It's an
old 870 I think.  I also have an old 4-core CPU somewhere that goes on
it.  I think.  I upgraded the mobo and CPU since the original build. 
Jude DaShiell
2023-09-19 08:20:01 UTC
On a previous computer, I had an Alien ATX case. The one drawback with
that case was only one drive slot for a DVD drive. I prefer computer
cases with a few more than that so internal drive sleds can be installed.
When onboard components break, if you have spare PCI slots available and
spare cash, you can replace them with alternative cards, provided you can
redirect the motherboard to use your replacement cards.


-- Jude <jdashiel at panix dot com> "There are four boxes to be used in
defense of liberty: soap, ballot, jury, and ammo. Please use in that
order." Ed Howdershelt 1940.
Post by Dale
Post by Dale
I tend to need quite a few PCIe slots.=C2=A0 I like to have my own v=
ideo
Post by Dale
Post by Dale
card.=C2=A0 I never liked the built in ones.
You=E2=80=99re just asking to be asked. ;-) Why don=E2=80=99t you lik=
e them? (I fear I may
Post by Dale
Post by Dale
have asked that before).
I get it when you wanna do it your way because it always worked=E2=84=
=A2 (which is
Post by Dale
Post by Dale
not wrong =E2=80=94 don=E2=80=99t misunderstand me) and perhaps you h=
ad some bad experience
Post by Dale
Post by Dale
in the past. OTOH it=E2=80=99s a pricey component usually only needed=
by gamers and
Post by Dale
Post by Dale
number crunchers. On-board graphics are just fine for Desktop and eve=
n
Post by Dale
Post by Dale
(very) light gaming and they lower power draw considerably. Give it a=
swirl,
Post by Dale
Post by Dale
maybe you like it. :) Both Intel and AMD work just fine with the kern=
el
Post by Dale
Post by Dale
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.=C2=A0 When it is built in, not a option.=C2=A0 I thi=
nk I'm on my
Post by Dale
Post by Dale
third in this rig.
I also need multiple outputs, two at least.
That is not a problem with iGPUs. The only thing to consider is the typ=
e of
Post by Dale
video connectors on the board. Most have two classical ones, some three=
,
Post by Dale
divided among HDMI and DP. And the fancy ones use USB-C with DisplayPor=
t
Post by Dale
alternative mode. Also, dGPUs draw a lot more when using two displays.
They have added a lot of stuff to mobos since I bought one about a
decade ago.=C2=A0 Maybe things have improved.=C2=A0 I just like PCIe slot=
s and
Post by Dale
cards.=C2=A0 Gives me more options.=C2=A0 Given how things have changed t=
ho, I may
Post by Dale
have to give in on some things.=C2=A0 I just like my mobos to be like Lin=
ux.=C2=A0
Post by Dale
Have something do one thing and do it well.=C2=A0 When needed, change tha=
t
Post by Dale
thing.=C2=A0 ;-)=C2=A0
Post by Dale
One for
monitor and one for TV.=C2=A0 My little NAS box I'm currently using is=
a Dell
Post by Dale
Post by Dale
something.=C2=A0 The video works but it has no GUI.=C2=A0 At times dur=
ing the boot
Post by Dale
Post by Dale
up process, things don't scroll up the screen.=C2=A0 I may be missing =
a
Post by Dale
Post by Dale
setting somewhere but when it blanks out, it comes back with a differe=
nt
Post by Dale
Post by Dale
resolution and font size.
In case you use Grub, it has an option to keep the UEFI video mode.
So there would be no switching if UEFI already starts with the proper
resolution.
That rig is old.  Maybe 10 or 15 years old.  No UEFI on it.  Does use
grub tho.  I duckduckgo'd it and changed some settings but last time I
booted, it did all that blinky, blank stuff.  Sometimes, I wonder if it
is hung up or crashed.  Then it pops up again and lets me know it is
still booting.  Eventually, I'll remove the monitor completely.  Then it
either boots up or it doesn't.  I just ssh in, decrypt the drives, then
mount from my main rig and start my backups.  I might add, this new
setup with LVM, the backups started at about the end of a previous
thread last Wednesday I think.  It's still copying data to the new
backup.  It's up to the files starting with a "M".  The ones starting
with "The" is pretty big.  It's gonna take a while.  Poor drives.  o_O
Post by Frank Steinmetzger
Post by Dale
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  ;-)
Yeah, and it’s neat if there is no flickering or blanking. So modern and
clean.
Post by Frank Steinmetzger
Post by Dale
Post by Frank Steinmetzger
Post by Dale
Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure mobo will pick
memory for me since usually only one or two will work anyway.
One or two what?
One or two types of memory.  Usually, plain or ECC.  Mobos are usually
pretty picky on their memory.
Hm… while I haven’t used that many different components in my life, so far
I have not had a system not accept any RAM. Just stick to the big names, I
guess.
I think one of my rigs uses DDR, I think my main rig is DDR3.  I noticed
they are up to DDR5 now.  What I meant was if a mobo requires DDR4, that
is usually all it will take.  Nothing else will work.  Whatever the mobo
requires is what you use, just pick a good brand as you say.
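
(A quick way to check what memory a board is actually running before buying
more, assuming root on any Linux box:

    # list installed modules with their type, size and speed
    dmidecode --type memory | grep -E 'Type:|Size:|Speed:'

That prints e.g. "Type: DDR3" per slot.  The board's qualified vendor list
still decides which exact sticks are known to work.)
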
Post by Frank Steinmetzger
Post by Dale
Since no one mentioned a better case, that Define thing may end up being
it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
I bought my current case, which has space for five 3.5" and six 5 1/4"
drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
ones have been full for a while and the 5 1/4" are about full too.
Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into
------------------------------------------------^^^^^
That should have been 5×3.5″. Too many threes and fives floatin’ around in
my head and it’s getting late.
Honestly, I read it the way you meant it.  lol  I've got about three
different kinds in my wish list.  Eventually, I'll take the side off and
see which one will work.  I also found one that I think can be used as a
external case.  It has a fan, power plug and eSATA connectors.  I think
it holds five drives.  If I get that, I just may scrap the setup I
currently have and have one large LVM setup and backup everything to one
set of drives.  It will also fit in my safe too.

My brain is starting to hurt.  ROFL

Dale

:-)  :-)

P. S.  I just realized, I have a old Gigabyte mobo on the shelf.  It's a
old 870 I think.  I also have a old 4 core CPU somewhere that goes on
it.  I think.  I upgraded mobo and CPU since original build.
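
(For anyone curious, the headless decrypt-and-mount routine Dale describes
boils down to something like the sketch below; every name is made up, and
it assumes LUKS on the NAS drives plus an NFS export:

    # on the NAS, over ssh: unlock and mount the backup volume
    ssh nas cryptsetup open /dev/disk/by-label/backup backup0
    ssh nas mount /dev/mapper/backup0 /export/backup

    # on the main rig: mount the export and sync
    mount nas:/export/backup /mnt/backup
    rsync -a --progress /home/media/ /mnt/backup/media/

rsync only transfers what changed, which is why the first full pass is the
slow one.)
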
Frank Steinmetzger
2023-09-19 08:30:02 UTC
Permalink
Post by Dale
They have added a lot of stuff to mobos since I bought one about a
decade ago.  Maybe things have improved.  I just like PCIe slots and
cards.  Gives me more options.
I definitely know the feeling. That is why I went with µATX instead of ITX
nine years ago. I thought “now that I have a beefy machine, I could get a
sound card and start music production” and stuff like that. It never
happened. Aside from an entry-level GPU for some gaming (which broke two
years ago, so I am back on Intel since then) I never used any of my slots.
But in the end, they are — as you say yourself — options, not necessities.
Post by Dale
Given how things have changed tho, I may
have to give in on some things.  I just like my mobos to be like Linux. 
Have something do one thing and do it well.  When needed, change that
thing.  ;-) 
Over the past years, boards tend to do less and less by themselves. It’s all
been migrated into the CPU: voltage regulation, basic graphics, memory
controller, lots of I/O. The chipset (at least in AMD land, I’ve been out of
touch with Intel for a while now) basically determines the amount of
*additional* I/O. The Deskmini X300 mini-PC that I mentioned earlier
actually has no chipset on its board, everything is done by the CPU.

What irks me, again, is market segmentation. Even though Ryzen CPUs have the
capability of 10 Gbps USB 3.1 Gen 2 built-in, the low-end boards do not
route that out, not even one port.
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

The only thing that makes some people bearable is their absence.
Wols Lists
2023-09-19 08:20:01 UTC
Permalink
Post by Dale
I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. 😄 Both Intel and AMD work just fine with the kernel
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not a option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.  I figure it is blanking during the switch.
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  😉
Well, in my case I've only recently upgraded to a system where AGPUs are
available :-)

Plus, although I haven't got it working, I want multi-seat (at present,
my system won't boot with two video cards). You can run multi-head off
integrated graphics, but as far as I know linux requires one video card
per seat.

Oh, and to the best of my knowledge, you can combine a video card and an
AGPU.

Cheers,
Wol
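
(On the multi-seat point: with systemd or elogind the seat assignment itself
is done in software via loginctl; whether a single card can serve two full
seats depends on the driver.  A rough sketch, device paths made up:

    # show what hardware belongs to the default seat
    loginctl seat-status seat0

    # move a second GPU plus a USB hub over to a new seat
    loginctl attach seat1 /sys/devices/pci0000:00/0000:01:00.0/drm/card1
    loginctl attach seat1 /sys/devices/pci0000:00/0000:00:14.0/usb3

The attachment is written out as udev rules, so it persists across reboots.)
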
Dale
2023-09-19 09:50:02 UTC
Permalink
Post by Wols Lists
Post by Dale
I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. 😄 Both Intel and AMD work just fine with the kernel
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not a option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.  I figure it is blanking during the switch.
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  😉
Well, in my case I've only recently upgraded to a system where AGPUs
are available :-)
Plus, although I haven't got it working, I want multi-seat (at
present, my system won't boot with two video cards). You can run
multi-head off integrated graphics, but as far as I know linux
requires one video card per seat.
Oh, and to the best of my knowledge, you can combine a video card and
an AGPU.
Cheers,
Wol
I been on Newegg using their rig builder feature.  Just to get rough
ideas, I picked a AMD Ryzen 9 5900X 12-Core 3.7 GHz Socket AM4.  Yea, I
did a copy and paste.  lol  It's a bit pricey but compared to my current
rig, I think it will run circles around it.  My current rig has a AMD FX
-8350 Eight-Core Processor running at 4GHz or so.  You think I'll see
some speed improvement or am I on the wrong track?  I'm also shooting
for 64GBs of memory at first.  I can put in two more sticks later.  I
got 32GB right now.  Thing is, I have 18 virtual desktops now.  o_O 

My problem is the mobo.  I need a few PCIe slots.  Most just don't have
enough.  Most have a slot for a video card.  Then maybe 2 other slightly
slower ones and maybe one slow one.  I can't recall what the names are
at the moment. I know the length of the connector tends to tell what
speed it is, tho some cheat and put long connectors but most of the
faster pins aren't used.  That confuses things.  Anyway, mobo, which I
will likely change, CPU and memory is already adding up to about $600. 
I don't need much of a video card tho.  The built in thing may be
enough, as long as I can connect my monitor and TV.  Either one DB15 and
a HDMI or two HDMI will work.  My monitor has both.  TV is HDMI.  Must
have TV!!

If someone knows of a good mobo, Gigabyte, ASUS preferred, that has
several PCIe slots, I'd like to know the model so I can check into it. 
It doesn't have to be the latest thing either.  I tend to drop down
several notches from the top to save money.  I still end up with a
pretty nice rig and save some money.

I got to get a larger hard drive next month. After that, case.  Then I
start saving up to buy the other stuff.  The big thing is the combo of
mobo, CPU and memory.  I like to get them at the same time and the same
place.  Just in case the smoke gets out. :/ 

Dale

:-)  :-) 
Michael
2023-09-19 10:30:01 UTC
Permalink
Post by Wols Lists
Post by Dale
Post by Frank Steinmetzger
I get it when you wanna do it your way because it always worked™
(which is
not wrong — don’t misunderstand me) and perhaps you had some bad
experience
in the past. OTOH it’s a pricey component usually only needed by
gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it. 😄 Both Intel and AMD work just fine with the kernel
drivers.
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo. When it is built in, not a option. I think I'm on my
third in this rig. I also need multiple outputs, two at least. One for
monitor and one for TV. My little NAS box I'm currently using is a Dell
something. The video works but it has no GUI. At times during the boot
up process, things don't scroll up the screen. I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size. I figure it is blanking during the switch.
My Gentoo box doesn't do that. I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up. I'm one of those
who watches. 😉
Well, in my case I've only recently upgraded to a system where AGPUs
are available :-)
Plus, although I haven't got it working, I want multi-seat (at
present, my system won't boot with two video cards). You can run
multi-head off integrated graphics, but as far as I know linux
requires one video card per seat.
Oh, and to the best of my knowledge, you can combine a video card and
an AGPU.
Cheers,
Wol
I been on Newegg using their rig builder feature. Just to get rough
ideas, I picked a AMD Ryzen 9 5900X 12-Core 3.7 GHz Socket AM4. Yea, I
did a copy and paste. lol It's a bit pricey but compared to my current
rig, I think it will run circles around it. My current rig has a AMD FX
-8350 Eight-Core Processor running at 4GHz or so. You think I'll see
some speed improvement or am I on the wrong track?
You should see a significant improvement. The 5900X boosts up to 4.9GHz and
it has 24 threads.
I'm also shooting
for 64GBs of memory at first. I can put in two more sticks later. I
got 32GB right now. Thing is, I have 18 virtual desktops now. o_O
My problem is the mobo. I need a few PCIe slots. Most just don't have
enough. Most have a slot for a video card. Then maybe 2 other slightly
slower ones and maybe one slow one. I can't recall what the names are
at the moment. I know the length of the connector tends to tell what
speed it is, tho some cheat and put long connectors but most of the
faster pins aren't used. That confuses things. Anyway, mobo, which I
will likely change, CPU and memory is already adding up to about $600.
I don't need much of a video card tho. The built in thing may be
enough, as long as I can connect my monitor and TV. Either one DB15 and
a HDMI or two HDMI will work. My monitor has both. TV is HDMI. Must
have TV!!
If someone knows of a good mobo, Gigabyte, ASUS preferred, that has
several PCIe slots, I'd like to know the model so I can check into it.
It doesn't have to be the latest thing either. I tend to drop down
several notches from the top to save money. I still end up with a
pretty nice rig and save some money.
What you describe looks more like a workstation or server tower MoBo, rather
than the current brood of retail MoBos which cater more for gaming.
Rich Freeman
2023-09-19 10:40:02 UTC
Permalink
I been on Newegg using their rig builder feature. Just to get rough
ideas, I picked a AMD Ryzen 9 5900X 12-Core 3.7 GHz Socket AM4. Yea, I
did a copy and paste. lol It's a bit pricey but compared to my current
rig, I think it will run circles around it. My current rig has a AMD FX
-8350 Eight-Core Processor running at 4GHz or so. You think I'll see
some speed improvement or am I on the wrong track?
Lol - they'd be night and day, and that's just looking at CPU. The
RAM is way faster too.

CPU mark lists the 5900X as 6-7x faster, and the 7900X as almost 9x faster.
My problem is the mobo. I need a few PCIe slots. Most just don't have
enough.
The trend is towards fewer slots. Some of that is driven by the
addition of M.2 slots which require 4 lanes each. More of the IO is
going to USB compared to PCIe, probably because that is what people
tend to use with desktops.
Most have a slot for a video card. Then maybe 2 other slightly
slower ones and maybe one slow one. I can't recall what the names are
at the moment. I know the length of the connector tends to tell what
speed it is, tho some cheat and put long connectors but most of the
faster pins aren't used.
This is actually pretty simple. PCIe is measured in lanes. There are
no slower/faster pins. There are just lanes.

A 4x slot has 4 lanes, a 1x slot has 1 lane, and a 16x slot has 16 lanes.

What you're talking about with "faster pins not being used" is
something like a 16x slot with only 4 lanes wired. That behaves like
a 4x slot, but lets you plug in physically larger cards. The missing
12 lanes aren't any faster than the 4 lanes that are wired, but
obviously 4 lanes don't go as fast as 16 lanes.
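
(You can see both what a slot is capable of and what was actually
negotiated with lspci; LnkCap is the ceiling, LnkSta the live link.  The
bus address below is just an example:

    lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
    #  LnkCap: ... Speed 8GT/s, Width x16 ...
    #  LnkSta: ... Speed 8GT/s, Width x4 ...

A 16x card sitting in a slot with only four lanes wired shows up exactly
like that: capability x16, status x4.)
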

The other factor is PCIe generation. Each generation doubles the
bandwidth, so a 1x PCIe v5 card with a supporting CPU is the same
speed as a 16x PCIe v1 card. The interface runs at the maximum
generation supported by both the card and the controller (located on
the CPU these days). Most cards don't actually support recent
generations - GPUs are the main ones that keep pace. I was talking
about 10GbE NICs earlier, and if one supported a recent enough PCIe
generation it could work fine in a 1x slot, but most use older
generations and require a 4x slot or so.

PCIe works fine if all the lanes aren't actually connected - you can
plug a 16x GPU into a 1x riser, or a 1x slot that has an open notch on
the end, and it will work fine. Though, in the latter case it will
probably need physical support as the 16x slots have locks for large
boards. The GPU will of course perform poorly with any kind of data
transfer.
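
(To put numbers on the doubling rule: usable per-lane throughput is roughly
0.25 GB/s for PCIe v1 and doubles each generation, so a back-of-envelope
check in Python:

    # approximate usable PCIe throughput, GB/s per direction
    def pcie_gbps(gen, lanes):
        per_lane = 0.25 * 2 ** (gen - 1)   # v1 = ~0.25 GB/s per lane
        return per_lane * lanes

    print(pcie_gbps(1, 16))   # 4.0 -- a 16x v1 slot
    print(pcie_gbps(5, 1))    # 4.0 -- a 1x v5 slot, same ballpark
    print(pcie_gbps(3, 4))    # 4.0 -- the 4x v3 slot a 10GbE NIC wants

A 10GbE NIC needs about 1.25 GB/s, so a single v3 lane (~1 GB/s) just
misses while a single v4 lane (~2 GB/s) would cover it -- which is why the
older cards end up needing a 4x slot.)
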
That confuses things. Anyway, mobo, which I
will likely change, CPU and memory is already adding up to about $600.
If you're going to be spending THAT much on CPU+MB+RAM then I'd
seriously look at how much moving to zen4 / AM5 costs. If you can get
something cheap by going AM4 by all means do it, but if you aren't
saving significant cash then you're buying into a much older platform.
I don't need much of a video card tho.
Freeing up the 16x slot when you're so driven by PCIe requirements is
a HUGE consideration here.
If someone knows of a good mobo, Gigabyte, ASUS preferred, that has
several PCIe slots, I'd like to know the model so I can check into it.
I think you need to rethink your approach. Look, there is no reason
you shouldn't be able to find a reasonably-priced motherboard that has
lots of PCIe slots. If nothing else the manufacturer could stick a
switch on the board, especially if you don't need PCIe v5 and don't
mind the board switching the v5 lanes into a ton of v3-4 ones.
However, nobody makes anything like that for consumers. There are
chips out there that do some of that, but you'd have to custom-build
something to use them.

You really need to figure out how to get by with mostly 1x cards, and
maybe 1-2 larger ones if you ditch the GPU. That is part of what
drove me to distributed storage, and also using USB3 for large numbers
of hard drives. PCs tend to have lots of unused USB3 capacity, and
that works fine for spinning disks. It just looks ugly. (As a bonus
the USB3 disks can often be obtained far cheaper.)
--
Rich
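
(One practical check if you go the USB3 route: make sure the enclosure
speaks UAS rather than plain usb-storage, as that matters for spinning
disks under load:

    lsusb -t
    # 'Driver=uas' on the enclosure is what you want;
    # 'Driver=usb-storage' means the older, slower BOT protocol

Most recent enclosures do UAS, but it is worth verifying before buying a
whole stack of them.)
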
Frank Steinmetzger
2023-09-19 12:30:01 UTC
Permalink
Post by Wols Lists
Oh, and to the best of my knowledge, you can combine a video card and
an AGPU.
BTW: it’s APU, without the G. Because it is an Accelerated Processing Unit
(i.e. a processor), not a GPU.
I been on Newegg using their rig builder feature.  Just to get rough
ideas, I picked a AMD Ryzen 9 5900X 12-Core 3.7 GHz Socket AM4.  Yea, I
did a copy and paste.  lol  It's a bit pricey but compared to my current
rig, I think it will run circles around it.  My current rig has a AMD FX
-8350 Eight-Core Processor running at 4GHz or so.  You think I'll see
some speed improvement or am I on the wrong track?
Twice the single-thread performance and 7 times multi-core:
https://www.cpubenchmark.net/cpu.php?cpu=AMD+FX-8350+Eight-Core&id=1780
https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+9+5900X&id=3870
Naturally at lower power consumption as well.
My problem is the mobo.  I need a few PCIe slots.  Most just don't have
enough.  Most have a slot for a video card.  Then maybe 2 other slightly
slower ones and maybe one slow one.  I can't recall what the names are
at the moment. I know the length of the connector tends to tell what
speed it is, tho some cheat and put long connectors but most of the
faster pins aren't used.  That confuses things.
Well they allow you to put larger cards in, but they don’t have the lanes
for it. Somewhere else in the thread was mentioned that the number of lanes
is very limited. Only the main slot (the big one for the GPU) is directly
connected to the CPU. The rest is hooked up to the chipset which itself is
connected to the CPU either via PCIe×4 (AMD) or whatchacallit (DMI?) for
Intel.
Anyway, mobo, which I
will likely change, CPU and memory is already adding up to about $600. 
I don't need much of a video card tho.  The built in thing may be
enough, as long as I can connect my monitor and TV.
The 5900X has no built-in. For the Ryzen 5000 series, only those with -G
have graphics. The 7000 ones all have a basic GPU (except maybe for some
with another suffix).
If someone knows of a good mobo, Gigabyte, ASUS preferred, that has
several PCIe slots, I'd like to know the model so I can check into it. 
It doesn't have to be the latest thing either.  I tend to drop down
several notches from the top to save money.  I still end up with a
pretty nice rig and save some money.
Look for yourself and filter what you need, like 1 or 2 HDMI, DP and PCIe:
AM4: https://skinflint.co.uk/?cat=mbam4&xf=18869_4%7E4400_ATX
AM5: https://skinflint.co.uk/?cat=mbam5&xf=18869_4%7E4400_ATX
Interestingly: the filter goes up to 6 PCIe slots for the former, but only to
4 for the latter.
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Wires are either too short, not available or don’t work.
Rich Freeman
2023-09-19 13:20:01 UTC
Permalink
BTW: it’s APU, without the G. Because it is an Accelerated Processing Unit
(i.e. a processor), not a GPU.
No real "reason" for it besides branding/naming/etc. They could have
called it a MPU for Mixed Processing Unit if they wanted, and no doubt
somebody would come up with a nice explanation about how that is the
only term that makes sense.

The GPU in a processor with an integrated GPU is a GPU like just about
any other. It might not have dedicated memory, and it might not be as
big, but they do the same thing.
Well they allow you to put larger cards in, but they don’t have the lanes
for it. Somewhere else in the thread was mentioned that the number of lanes
is very limited. Only the main slot (the big one for the GPU) is directly
connected to the CPU. The rest is hooked up to the chipset which itself is
connected to the CPU either via PCIe×4 (AMD) or whatchacallit (DMI?) for
Intel.
So, on most AMD boards these days all the PCIe lanes are wired to the
CPU I believe. The higher-end motherboards have switches, and not all
the lanes may be the highest supported generation, but I don't think
any modern AMD motherboards have any kind of PCIe controller on them.

Basically memory, USB, and PCIe are all getting so fast that trying to
implement a whole bunch of separate controller chips just doesn't make
sense. They benefit from higher end fabs, though not necessarily
quite as high-end as the processor cores themselves (hence the AMD
chiplet design). A single PCIe v5 lane is moving data at 32Gb/s.
Plus if you were going to consolidate things at that speed on the
motherboard you'd need one heck of a pipe to get it from there to the
CPU anyway (something the CPU has internally).

It looks like Intel still puts a controller on the motherboard.

AM5 has 28 PCIe v5 lanes, 4 PCIe v4 lanes, and 4 20Gbps USB3 ports.
LGA1700+MB has 16 PCIe v5 lanes, 16 PCIe v4 lanes, 16 PCIe v3 lanes,
and quite a bit more USB3 (though that is via the MB so I'm not sure
it can sustain them all at max).

Not meant as an Intel vs AMD comparison as I'm sure there are caveats
in the details, and individual motherboards use that IO differently,
but just meant to give a sense of what these desktop CPUs typically
deliver.

In contrast here are the latest server sockets:

SP5 (AMD) has 128 PCIe v5 lanes, and 4 20Gbps USB3 ports (and 16
memory channels is of course a big selling point - vs 4 on AM5)
LGA 4677 (Intel) has 16 PCIe v4 lanes and 12 PCIe v3 lanes, and again
more USB3 (all via the MB). I'm actually kinda surprised how few
lanes it has. It also only has 8 memory channels.

Seems like PCIe v5 isn't as much of a selling point on servers.

If I missed some detail please point it out - I mainly run AMD desktop
CPUs so there could be some server/Intel capabilities out there I'm
less familiar with. With the Intel approach of putting more on the
motherboard I suspect that there might be bottlenecks if all that IO
were to be used at once, though that does seem unlikely.
AM4: https://skinflint.co.uk/?cat=mbam4&xf=18869_4%7E4400_ATX
AM5: https://skinflint.co.uk/?cat=mbam5&xf=18869_4%7E4400_ATX
Interestingly: the filter goes up to 6 PCIe slots for the former, but only to
4 for the latter.
You can definitely get more PCIe slots on AM5, but the trend is to
have less in general. Look at the X670 chipset boards as those tend
to have PCIe switches which give them more lanes. The switched
interfaces will generally not support PCIe v5.

That said, the SATA ports tend to take up lanes (AM5 has no SATA
support on the CPU), so motherboards that have 4x/2x slots available
might disable some SATA ports if they use them.

The trend is definitely more towards M.2 and those each eat up 4 lanes.

In any case, if what you want is lots of IO, I guess you can shell out
for an EPYC...
--
Rich
Frank Steinmetzger
2023-09-19 14:40:02 UTC
Permalink
Post by Rich Freeman
Post by Frank Steinmetzger
Well they allow you to put larger cards in, but they don’t have the lanes
for it. Somewhere else in the thread was mentioned that the number of lanes
is very limited. Only the main slot (the big one for the GPU) is directly
connected to the CPU. The rest is hooked up to the chipset which itself is
connected to the CPU either via PCIe×4 (AMD) or whatchacallit (DMI?) for
Intel.
So, on most AMD boards these days all the PCIe lanes are wired to the
CPU I believe.
Not all. Only the main slot. The rest is routed through the chipset. I’m
only speaking of expansion slots here. But for NVMe it is similar: the
primary one is attached to the CPU, any other is connected via the chipset.
This is for AM4. AM5 provides two NVMes.
Post by Rich Freeman
The higher-end motherboards have switches, and not all
the lanes may be the highest supported generation, but I don't think
any modern AMD motherboards have any kind of PCIe controller on them.
Here are the I/O capabilities of the socket:
https://www.reddit.com/r/Amd/comments/bus60i/amd_x570_detailed_block_diagram_pcie_lanes_and_io/
A slight problem is that it is connected to the CPU by only 4.0×4. So tough
luck if you want to do parallel high-speed stuff with two PCIe×4 M.2 drives.
Post by Rich Freeman
Basically memory, USB, and PCIe are all getting so fast that trying to
implement a whole bunch of separate controller chips just doesn't make
sense.
However, the CPU has a limited number of them, hence there are more in the
chipset. Most notably SATA.
Post by Rich Freeman
Post by Frank Steinmetzger
AM4: https://skinflint.co.uk/?cat=mbam4&xf=18869_4%7E4400_ATX
AM5: https://skinflint.co.uk/?cat=mbam5&xf=18869_4%7E4400_ATX
Interestingly: the filter goes up to 6 PCIe slots for the former, but only to
4 for the latter.
You can definitely get more PCIe slots on AM5, but the trend is to
have less in general.
Those look really weird. “Huge” ATX boards, but all covered up with fancy
gamer-style plastic lids and only two slots poking out.
Post by Rich Freeman
Look at the X670 chipset boards as those tend to have PCIe switches which
give them more lanes. The switched interfaces will generally not support
PCIe v5.
The X series are two “B-chipset chips” daisychained together to double the
downstream connections. Meaning one sits behind the other from the POV of
the CPU and they share their uplink.

Here are some nice block diagrams of the different AM5 chipset families:
https://www.hwcooling.net/en/amd-am5-platform-b650-x670-x670e-chipsets-and-how-they-differ/
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Greet every douche, for he may be your superior tomorrow.
Rich Freeman
2023-09-19 15:10:01 UTC
Permalink
Post by Frank Steinmetzger
Post by Rich Freeman
The higher-end motherboards have switches, and not all
the lanes may be the highest supported generation, but I don't think
any modern AMD motherboards have any kind of PCIe controller on them.
https://www.reddit.com/r/Amd/comments/bus60i/amd_x570_detailed_block_diagram_pcie_lanes_and_io/
So, that is AM4, not AM5
Post by Frank Steinmetzger
A slight problem is that it is connected to the CPU by only 4.0×4. So tough
luck if you want to do parallel high-speed stuff with two PCIe×4 M.2 drives.
So, that block diagram is a bit weak. If you look on the left side it
clearly shows 20 PCIe lanes, and the GPU only needs 16. So there are
8 lanes for the MB chipset to use. The 4 on the left aren't the same
as the 4 on the right I think.

Again, that is AM4 which I haven't looked into as much. AM5 increases
the v5 lanes and still has some v4 lanes.
Post by Frank Steinmetzger
Post by Rich Freeman
Basically memory, USB, and PCIe are all getting so fast that trying to
implement a whole bunch of separate controller chips just doesn't make
sense.
However, the CPU has a limited number of them, hence there are more in the
chipset. Most notably SATA.
Yup, especially since AM5 dropped SATA entirely. The chipset would be
using PCIe lanes for SATA.
Post by Frank Steinmetzger
Those look really weird. “Huge” ATX boards, but all covered up with fancy
gamer-style plastics lids and only two slots poking out.
Yeah, that is definitely the trend. Few are using PCIe cards, so they
aren't supporting as many.

In theory they could take one PCIe v5 lane on the board and run it
into a switch and provide 4 more 1x v3 lanes for older expansion
cards, and so on. Those v5 lanes can move a lot of data and other
than the GPU and maybe NVMe little is using them.

All the same, desktop CPUs are a bit starved for lanes.
Post by Frank Steinmetzger
Post by Rich Freeman
Look at the X670 chipset boards as those tend to have PCIe switches which
give them more lanes. The switched interfaces will generally not support
PCIe v5.
The X series are two “B-chipset chips” daisychained together to double the
downstream connections. Meaning one sits behind the other from the POV of
the CPU and they share their uplink.
https://www.hwcooling.net/en/amd-am5-platform-b650-x670-x670e-chipsets-and-how-they-differ/
Thanks - that site is handy.

I'm sure PCIe v5 switching is hard/expensive, but they definitely
could mix things up however they want. The reality is that most IO
devices aren't going to be busy all the time, so you definitely could
split 8 lanes up 64 ways, especially if you drop a generation or two
along the way. It is all packet switched so it is really no different
than having a 24 port gigabit network switch with a 10Gb uplink - sure
in theory the uplink could be saturated but typically it would not be.

Ultimately though the problem is supply and demand. There just isn't
much demand for consumer boards with stacks of expansion cards, so
nobody makes them. They'd rather give you more M.2 slots, USB, or
just make the CPU cheaper.

That is why I've been trying to change how I design my storage/etc.
Rather than trying to find the one motherboard+HBA combo that lets me
cram 16 drives into one case, it is WAY easier to get a bunch of $100
used corporate SFF desktops, slap a 10GbE NIC in them, and plug USB3
hard drives into them. The drives still perform about as fast, and it
is infinitely expandable. If anything breaks it can be readily
replaced by commoditized hardware. Hardest part is just making sure
the SFF PC has a 16x slot and integrated graphics, but that isn't too
big of an ask.

Server hardware definitely avoids many of the limitations, but it just
tends to be super-expensive. Granted, I haven't been looking on eBay
for used stuff. The used desktop gear at least tends to be reasonably
low-power - you have to watch the server gear as the older stuff can
tend to guzzle power (newer stuff isn't as bad). Granted, you can
definitely find server hardware that can accommodate 12+ drives.
--
Rich
Frank Steinmetzger
2023-09-19 17:10:01 UTC
Permalink
Post by Rich Freeman
Post by Frank Steinmetzger
Post by Rich Freeman
The higher-end motherboards have switches, and not all
the lanes may be the highest supported generation, but I don't think
any modern AMD motherboards have any kind of PCIe controller on them.
https://www.reddit.com/r/Amd/comments/bus60i/amd_x570_detailed_block_diagram_pcie_lanes_and_io/
So, that is AM4, not AM5
Yup. I kept on rambling about AM4, because that’s what I laid my eyes on
(and so did Dale a few posts up).
Post by Rich Freeman
Post by Frank Steinmetzger
A slight problem is that it is connected to the CPU by only 4.0×4. So tough
luck if you want to do parallel high-speed stuff with two PCIe×4 M.2 drives.
So, that block diagram is a bit weak. If you look on the left side it
clearly shows 20 PCIe lanes, and the GPU only needs 16. So there are
8 lanes for the MB chipset to use.
No, the chipset downlink is always four lanes wide. PCIe 4.0 for most
AM4 CPUs, but PCIe 3.0 for the monolithic APUs (because they don’t have
4.0 at all, as their I/O die is different). The remaining four lanes are
reserved for an NVMe slot.
Post by Rich Freeman
The 4 on the left aren't the same as the 4 on the right I think.
The diagram is indeed a bit confused in that part.
Post by Rich Freeman
Again, that is AM4 which I haven't looked into as much. AM5 increases
the v5 lanes and still has some v4 lanes.
AFAIR, PCIe 5 is only guaranteed for the NVMe slot. The rest is optional or
subject to the chipset. As in the A series doesn’t have it, stuff like that.
But it’s been a while since I read about that, so my memory is hazy.
Post by Rich Freeman
All the same desktop CPUs are a bit starved for lanes.
Hey we did get four more now with AM5 vs. AM4.
Post by Rich Freeman
I'm sure PCIe v5 switching is hard/expensive, but they definitely
could mix things up however they want. The reality is that most IO
devices aren't going to be busy all the time, so you definitely could
split 8 lanes up 64 ways, especially if you drop a generation or two
along the way.
Unfortunately you can’t put low-speed connectors on a marketing sheet, when
competitors have teh shizz.
Post by Rich Freeman
Server hardware definitely avoids many of the limitations, but it just
tends to be super-expensive.
Which is funny because with the global cloud trend, you would think that its
supply increases and prices go down.
--
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If you were born feet-first, then, for a short moment,
you wore your mother as a hat.
Rich Freeman
2023-09-19 20:20:01 UTC
Permalink
Post by Frank Steinmetzger
No, the chipset downlink is always four lanes wide.
The diagram you linked has 8, but I can't vouch for its accuracy.
Haven't looked into it for AM4.
Post by Frank Steinmetzger
Post by Rich Freeman
Again, that is AM4 which I haven't looked into as much. AM5 increases
the v5 lanes and still has some v4 lanes.
AFAIR, PCIe 5 is only guaranteed for the NVMe slot. The rest is optional or
subject to the chipset.
Actually, PCIe v5 isn't guaranteed for the NVMe slot either, or even
the first 16x slot. It is all subject to the motherboard design.
There are AM5 MBs that don't have any PCIe v5 slots.
Post by Frank Steinmetzger
Post by Rich Freeman
I'm sure PCIe v5 switching is hard/expensive, but they definitely
could mix things up however they want. The reality is that most IO
devices aren't going to be busy all the time, so you definitely could
split 8 lanes up 64 ways, especially if you drop a generation or two
along the way.
Unfortunately you can’t put low-speed connectors on a marketing sheet, when
competitors have teh shizz.
Well, you can, but they don't fit on a tweet. Just my really long emails...

We're not their target demographic in any case. Now, if Dale wanted
more RGB lights and transparent water hoses, and not more PCIe slots,
the market would be happy to supply...
Post by Frank Steinmetzger
Post by Rich Freeman
Server hardware definitely avoids many of the limitations, but it just
tends to be super-expensive.
Which is funny because with the global cloud trend, you would think that its
supply increases and prices go down.
I think the problem is that the buyers are way less price-sensitive.

When a medium/large company is buying a server, odds are they're
spending at least tens of thousands of dollars on the
software/maintenance side of the project, if not hundreds of thousands
or more. They also like to standardize on hardware, so they'll pick
the one-size-fits-all solution that can work in any situation, even if
it is pricey. Paying $5k for a server isn't a big deal, especially if
it is reliable/etc so that it can be neglected for 5 years (since
touching it involves dragging in the project team again, which
involves spending $15k worth of time just getting the project
approved).

The place where they are price-sensitive is on really large-scale
operations, like cloud providers, Google, social media, and so on -
where they need tens of thousands of identical servers. These
companies would create demand for very efficiently-priced hardware.
However, at their scale they can afford to custom develop their own
stuff, and they don't sell to the public, so while that cheap server
hardware exists, you can't obtain it. Plus it will be very tailored
to their specific use case. If Google needs a gazillion workers for
their search engine they might have tensor cores and lots of CPU, and
maybe almost no RAM/storage. If they need local storage they might
have one M.2 slot and no PCIe slots at all, or some other lopsided
config. Backblaze has their storage pods that are basically one giant
stack of HDD replicators and almost nothing else. They probably don't
even have sideband management on their hardware, or if they do it is
something integrated with their own custom solutions.

Oh, the other big user is the US government, and they're happy to pay
for a million of those $5k servers as long as they're assembled in the
right congressional districts. Reducing the spending probably reduces
the number of jobs, so that is an anti-feature... :)
--
Rich
Grant Edwards
2023-09-19 13:00:01 UTC
Permalink
Post by Dale
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not a option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.
The built-in Intel video on an oldish Intel i5 at the office is
currently driving 3 displays. The built-in video on the AMD at home is
driving 2 and, IIRC, could handle 2 more.

--
Grant
Dale
2023-09-20 01:40:01 UTC
Permalink
Post by Grant Edwards
Post by Dale
Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not a option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.
The built-in Intel video on an oldish Intel i5 at the office is
currently driving 3 displays. The built-in video on the AMD at home is
driving 2 and, IIRC, could handle 2 more.
--
Grant
.
Then maybe I can use the onboard one.  At least I know it is a option. 
Most of the mobos I've seen, which are older by the way, have only one
port, usually a DB15.  I think I got one around here somewhere that has
a HDMI, I think. 

That's good to know.

Dale

:-)  :-) 
Grant Edwards
2023-09-20 03:00:02 UTC
Permalink
Post by Dale
Post by Grant Edwards
The built-in Intel video on an oldish Intel i5 at the office is
currently driving 3 displays. The built-in video on the AMD at home is
driving 2 and, IIRC, could handle 2 more.
Then maybe I can use the onboard one.  At least I know it is a option. 
Most of the mobos I've seen, which are older by the way, have only one
port, usually a DB15.  I think I got one around here somewhere that has
a HDMI, I think. 
The old i5 used to have an NVidia 3xx Quadro board installed which had
a dual DisplayPort pigtail cable with DisplayPort to DVI adapters to
drive two 1600x1200 monitors. I wasn't using the built-in graphics at
all because we've all known for decades that built-in graphics were
useless, right?

Then the pandemic happened, and I brought the NVidia card and one of
those monitors home for the duration, leaving the other monitor
plugged in to the i5 motherboard's DVI output.

Not too long after that, NVidia stopped supporting the Quadro card. I
got to a point where I needed to update the kernel for [some reason].
But, the NVidia driver wasn't available for a kernel that recent. The
i5 motherboard I had at home at the time had DVI, HDMI, and DB15
connectors on the back. I sort of assumed that the built-in graphics
could only mirror the same image onto multiple displays, but once I
got the right cables, it drove a 1600x1200 and a 1920x1200 at full
resolution with no problems. The one thing the built-in graphics
couldn't do is provide two separate X11 displays (instead of one
virtual display that's spread out over two monitors). For various
reasons I had always run multiple separate X11 desktops on NVidia
cards rather than one desktop spread over multiple monitors. But I got
used to the single large virtual desktop setup.

I've since replaced the home i5 machine with a Ryzen 5 3400G, and it
was definitely a step up in video performance.

Then I acquired a couple more monitors so that I had three at the
office. That i5 motherboard has DVI, HDMI, mini-DisplayPort and DB15
connectors. With the right adapter cables, I was able to connect two
1600x1200 monitors to DVI and HDMI, plus a 1600x900 monitor to the
mini-DP port. It drives all of them at their native resolutions.

I don't do any heavy duty gaming or 3D stuff, so I can't vouch for
performance in that area. But both the i5 and Ryzen 5 have HW
direct-rendering 3D support, and the RC heli/plane flight simulator I do
play with seems happy enough (the two year old Ryzen 3400G does
maintain noticeably higher frame-rates than the ten year old i5-3570K).

Neither one of these processors was top of the class for integrated
graphics when they were introduced. I tend to go for lower TDP to
keep fan noise down, and that limits GPU performance.

--
Grant
Dale
2023-09-20 04:40:01 UTC
Permalink
Post by Grant Edwards
Post by Dale
Post by Grant Edwards
The built-in Intel video on an oldish Intel i5 at the office is
currently driving 3 displays. The built-in video on the AMD at home is
driving 2 and, IIRC, could handle 2 more.
Then maybe I can use the onboard one.  At least I know it is a option. 
Most of the mobos I've seen, which are older by the way, have only one
port, usually a DB15.  I think I got one around here somewhere that has
a HDMI, I think. 
The old i5 used to have an NVidia 3xx Quadro board installed which had
a dual DisplayPort pigtail cable with DisplayPort to DVI adapters to
drive two 1600x1200 monitors. I wasn't using the built-in graphics at
all because we've all known for decades that built-in graphics were
useless, right?
Then the pandemic happened, and I brought the NVidia card and one of
those monitors home for the duration, leaving the other monitor
plugged in to the i5 motherboard's DVI output.
Not too long after that, NVidia stopped supporting the Quadro card. I
got to a point where I needed to update the kernel for [some reason].
But, the NVidia driver wasn't available for a kernel that recent. The
i5 motherboard I had at home at the time had DVI, HDMI, and DB15
connectors on the back. I sort of assumed that the built-in graphics
could only mirror the same image onto multiple displays, but once I
got the right cables, it drove a 1600x1200 and a 1920x1200 at full
resolution with no problems. The one thing the built-in graphics
couldn't do is provide two separate X11 displays (instead of one
virtual display that's spread out over two monitors). For various
reasons I had always run multiple separate X11 desktops on NVidia
cards rather than one desktop spread over multiple monitors. But I got
used to the single large virtual desktop setup.
I've since replaced the home i5 machine with a Ryzen 5 3400G, and it
was definitely a step up in video performance.
Then I acquired a couple more monitors so that I had three at the
office. That i5 motherboard has DVI, HDMI, mini-DisplayPort and DB15
connectors. With the right adapter cables, I was able to connect two
1600x1200 monitors to DVI and HDMI, plus a 1600x900 monitor to the
mini-DP port. It drives all of them at their native resolutions.
I don't do any heavy duty gaming or 3D stuff, so I can't vouch for
performance in that area. But both the i5 and Ryzen 5 have HW
direct-rendering 3D support, and the RC heli/plane flight simulator I do
play with seems happy enough (the two year old Ryzen 3400G does
maintain noticeably higher frame-rates than the ten year old i5-3570K).
Neither one of these processors was top of the class for integrated
graphics when they were introduced. I tend to go for lower TDP to
keep fan noise down, and that limits GPU performance.
--
Grant
The way my displays are set up is like this.  In nvidia-settings, I set
my monitor as primary.  In nvidia-settings, I set the second display,
sometimes called screen 1, to be to the right of primary display. 
Primary is also called screen 0 in places.  Names seem to change at
times.  My usual computer stuff is on the primary screen, screen 0. 
However, to watch TV, I right-click on a file and pick smplayer.  Now
smplayer is set to go to screen 1, right display, automatically and go
full screen.  Also, sound is set to go there as well.  It behaves just
the same as it does when I'm watching some other device, cable etc. 

Neither of the displays takes much as far as power goes.  Heck, a lot of
videos I watch are only 720p anyway.  From what you describe, that
should be more than enough for my uses.  It seems that the way on board
stuff works even with Linux has come a long ways.  Sounds like the Linux
drivers have come a long ways too. 

This is good to know.  It helps with some options at least. 

Dale

:-)  :-) 
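
(The layout Dale describes can also be scripted with xrandr instead of
clicking through nvidia-settings; the output names below are examples and
vary by driver:

    # monitor as primary, TV extended to its right
    xrandr --output DP-0 --primary --output HDMI-0 --right-of DP-0

nvidia-settings can save an equivalent layout to xorg.conf, which is what
makes it stick across X restarts.)
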
Wols Lists
2023-09-19 08:30:02 UTC
Permalink
Post by Frank Steinmetzger
With so many drives, you should also include a pricey power supply. And/or a
server board which supports staggered spin-up. Also, drives of the home NAS
category (and consumer drives anyways) are only certified for operation in
groups of up to 8-ish. Anything above and you sail in grey warranty waters.
Higher-tier drives are specced for the vibrations of so many drives (at
least I hope, because that’s what they™ tell us).
Have you seen the article where somebody tests that? And yes, it's true.
The more drives you have, the more you need damping. If all the drives
move their heads together, the harder it is for them to home in on the
correct track, to the point where you get the "perfect storm" of
vibration causing them all to reset, go back to park, try again, and
they are shaking so much none of them can find what they're looking for.
Post by Frank Steinmetzger
Post by Dale
To be honest, I kinda like the Fractal Design Define 7
XL right now despite the higher cost.  I could make a NAS/backup box
with it and I doubt I'd run out of drive space even if I started using
RAID and mirrored everything, at a minimum.
With 12 drives, I would go for parity RAID with two parity drives per six
drives, not for a mirror. That way you get 2/3 storage efficiency vs. 1/2
and more robustness; in parity, any two drives may fail, but in a cluster of
mirrors, only specific drives may fail (not two of the same mirror). If the
drives are huge, nine drives with three parity drives may be even better
(because rebuilds get scarier the bigger the drives get).
One of my projects in my copious (not) free time was to try and
implement raid-61. Like raid-10, you could spread it across any number
of drives (subject to a minimum). You could lose any 4 drives which
gives you a minimum of five (although with that few that would be the
equivalent of a five-times mirror).

Hey ho, I don't think that's going to happen now.

Cheers,
Wol
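
(For reference, the two-parity layout Frank suggests maps onto Linux md
RAID6; a minimal sketch with made-up device names:

    # six drives, any two may fail; usable space is 4/6 = 2/3
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    mkfs.ext4 /dev/md0

With 18 bays that could be three such sets; for the three-parity variant
he mentions, ZFS raidz3 is the usual route, since md RAID has no triple
parity level.)
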
Dale
2023-11-10 06:00:02 UTC
Permalink
Post by Dale
Howdy,
This is a work in progress and may take some time, financially if
nothing else.  With hindsight, I wish I had done this before the price
of everything went up but some things are getting more reasonable. My
first task, a case.  At this point, I may build a new system in the new
case, or, I might build the new system in my current case.  It depends
on which case I buy.  A cube shape wouldn't work for my main system.  It
would take up to much space, it would however make a great NAS box.  My
current case is a Cooler Master HAF-932 with those huge 200mm fans.  It
has a top fan and that thing removes a lot of warm air.  A top fan
really improves heat removal.  After all, heat naturally rises.  So any
case that has a top fan gets extra points with me. 
<<<SNIP>>>
So, anyone think they know of a case that might beat those, especially
the Define 7, and still have a price that is reasonable?  I also look at
cooling fans.  I really like how cool my current case stays.  Also, I'm
in the USA so needs to be available here.  I mostly use Ebay but
sometimes use Amazon.  I've also used Newegg, Tigerdirect and a couple
other sites as well.  If you live outside the USA, brand and model will
do and I can look it up to see if it is available here or not. 
<<<SNIP>>>
Dale
:-)  :-)
A little update.  I got the Fractal Design Define 7 XL in earlier
today.  It is huge.  Size wise, it is about like my Cooler Master
HAF-932.  It's just more boxy where the Cooler Master has some angled
edges.  Anyway, it is very different than any case I've ever seen on the
inside.  All the panels has these little ball things that snap them in
and it feels like there may be a magnet in places but could just be the
ball things.  Hard to tell.  I think it will hold a lot of drives
eventually.  It can be made into a super nice NAS box tho.  I mean super
nice. 

The one thing I don't like.  Reading the description, it makes it sound
like it comes with all the cages needed for all hard drives to be
mounted.  It does not come with those.  They are extra and don't always
come in a set.  One part can be the cage part of it.  Another part can
be the little tray that the drive actually mounts on and slides into the
cage.  The cage part is about $20.  Both pieces are about $35 bought
together.  I'm looking on ebay at the moment.  I may can get a better
deal from Fractal tho.  Still, be prepared for not getting it really
complete if you plan to install lots of drives right away. 

If you don't like top mounted fans, that is a option.  It has a solid
panel for the top.  Me, I like top mounted fans.  Heat rises, why fight
it.  For that, it comes with a really nice mesh thing for the top that
looks OK.  I haven't put it on yet tho.  Given the number of fans, I may
try it without the top fans.  Just to see.  I do kinda wish it had a
side fan but the side is all glass. 

It is pricey.  Given I still have to buy the cage kits, about 4 or 5 I
think, that drives the cost up even more.  Still, it is a nice case.  I
just wish it came with a complete set of drive cages. 

There are quite a few pics and videos of the thing already so I'm not
going to pollute the mailing list with those.  If someone wants a pic of
something, I could send it off list.  Honestly tho, there are a lot
already around to search for and find. 

Oh, it is heavy.  Packed up it weighs in at 42 lbs.  The box might weigh
a few pounds.  I'd guess, about 38 maybe 39 lbs. 

Since my water heater decided to die, that took my puter money.  Gotta
build up again to buy mobo, CPU and memory.  Maybe prices will drop a
little.  :/ 

On video cards.  I bought a 4 pack of PCIe version 2 cards.  I'm pretty
sure my new mobo will be PCIe V3.  I think I got that right. Will that
slower card work in there?  If so, will it slow anything else down?  Or
do I need to get a matched set? 

Thanks to all.  This may take a little longer than planned after the
water heater failure.  Dang near $600. 

Dale

:-)  :-) 
Jude DaShiell
2023-11-10 13:20:02 UTC
Permalink
On one computer I had it came with an Alien ATX case. If memory serves
that one had a top fan. The only thing I didn't like about that case was
not enough slots for two drive sleds in addition to the dvd burner. Only
one sled could be accommodated.


-- Jude <jdashiel at panix dot com> "There are four boxes to be used in
defense of liberty: soap, ballot, jury, and ammo. Please use in that
order." Ed Howdershelt 1940.
t***@sys-concept.com
2023-11-10 17:10:01 UTC
Permalink
Thelma
Post by Dale
[snip]

I am most likely too late, but as for a case, I have two of these:
https://www.newegg.ca/white-ssupd-meshlicious-mini-itx/p/2AM-030R-00005

They are mesh type cases and run very cool, no heat at all; I'm planning on getting two more.

Thelma
Dale
2023-11-10 21:20:01 UTC
Permalink
Post by t***@sys-concept.com
Thelma
Post by Dale
[snip]
https://www.newegg.ca/white-ssupd-meshlicious-mini-itx/p/2AM-030R-00005
They are mesh type cases and run very cool, no heat at all; I'm planning on getting two more.
Thelma
The case I got holds I think 18 3.5" hard drives.  Eventually, I'll run
out of room there if I keep downloading stuff.  I got plenty of
entertainment tho.  lol  The case you linked to wouldn't even be a
start.  It does look nice tho.  I bet that mesh does allow a lot of air
to flow even without fans.

Dale

:-)  :-) 
William Kenworthy
2023-11-11 03:50:01 UTC
Permalink
Post by Dale
Post by t***@sys-concept.com
Thelma
Post by Dale
[snip]
https://www.newegg.ca/white-ssupd-meshlicious-mini-itx/p/2AM-030R-00005
They are mesh type cases and run very cool, no heat at all; I'm planning on getting two more.
Thelma
The case I got holds I think 18 3.5" hard drives.  Eventually, I'll run
out of room there if I keep downloading stuff.  I got plenty of
entertainment tho.  lol  The case you linked to wouldn't even be a
start.  It does look nice tho.  I bet that mesh does allow a lot of air
to flow even without fans.
Dale
:-)  :-)
If you're handy with some tools, you could do one of these ...
"https://www.backblaze.com/blog/open-source-data-storage-server/"

BillK
Dale
2023-11-11 07:30:01 UTC
Permalink
Post by William Kenworthy
Post by Dale
Post by t***@sys-concept.com
Thelma
Post by Dale
[snip]
https://www.newegg.ca/white-ssupd-meshlicious-mini-itx/p/2AM-030R-00005
They are mesh type cases and run very cool, no heat at all; I'm planning on getting two more.
Thelma
The case I got holds I think 18 3.5" hard drives.  Eventually, I'll run
out of room there if I keep downloading stuff.  I got plenty of
entertainment tho.  lol  The case you linked to wouldn't even be a
start.  It does look nice tho.  I bet that mesh does allow a lot of air
to flow even without fans.
Dale
:-)  :-)
If you're handy with some tools, you could do one of these ...
"https://www.backblaze.com/blog/open-source-data-storage-server/"
BillK
Now imagine using 16, 18 or 20TB drives instead of 4TB ones in that
thing???  That is really, really nice tho.  I've thought of building a
rig on a piece of plywood.  Then I could add hard drives to my heart's
content by just adding some sort of drive cages, of my own making if
needed.  I could take a 2 foot by 4 foot piece of plywood and have room
for TONS of drives.  I've seen several cages that hold 4 or 5 drives
each.  The mobo, power supply and such would only take a small amount
of space really.  Lots of room to mount drive cages.  Cooling wouldn't
be a real issue either since everything is wide open.
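To put rough numbers on that, here is a minimal sketch of the capacity
arithmetic, assuming 15-drive RAID6 groups (two parity drives each);
the drive count and parity level are illustrative, not a
recommendation.

    # Rough RAID capacity arithmetic.  The 15-drive RAID6 group is an
    # assumed layout for illustration; swap in your own drive count and
    # parity level.
    def usable_tb(n_drives: int, drive_tb: float, parity: int = 2) -> float:
        """Usable space of one RAID group: data drives times drive size."""
        return (n_drives - parity) * drive_tb

    for tb in (4, 16, 18, 20):
        print(f"15 x {tb}TB drives in RAID6: {usable_tb(15, tb):.0f}TB usable")

With 4TB drives one such group tops out at 52TB usable; with 20TB
drives the same 15 bays give 260TB, a five-fold jump with no extra
hardware.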

I'm thinking of building a small storage building, maybe an 8 foot by
12 foot one, to put my deep freezer and my backup system in, along with
some other electronic stuff.  I could have it far enough away that if
something happened to my main house, where my main system is, it
wouldn't affect my backups.  Finding that ethernet to fiber converter
thingy helped with that.  I could have a small A/C and heater to keep
the temps reasonable.  I would build it well insulated of course.

Well, at least the water heater is done with.  Shouldn't have to worry
about it for 15 or 20 years.  May be the last one I buy.  o-O

Oh, my 770T NAS box setup was acting weird.  Sometimes it wouldn't come
up right.  It acted like something was not posting correctly at power
up.  I bought a 4 pack of Nvidia NVS 510 cards with 4 outputs.  I
replaced the video card that was in there and after a couple boot ups,
it seems to be working better.  I guess that old card was a bit
flakey.  I hope it stays that way.

Dale

:-)  :-) 
