By semiconductor standards, SSDs are a bit weird.

They're made with "flash memory" chips, which, unusually for semiconductors, do actually wear out: each cell survives only a limited number of erase-write cycles (roughly 100k for older low-density flash, far fewer for today's denser chips). On a busy SSD, that budget can be used up surprisingly quickly.
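A rough back-of-envelope sketch of what that budget means in practice. The function and all the numbers here are illustrative assumptions, not any manufacturer's specification; real endurance ratings (TBW) depend on the controller and workload.

```python
# Back-of-envelope SSD endurance estimate (illustrative numbers, not a spec).
def endurance_tb_written(capacity_gb, cycles_per_cell, write_amplification=2.0):
    """Approximate total host writes (in TB) before the rated cycles are used up."""
    # Each cell can be erased/rewritten cycles_per_cell times; the controller's
    # extra internal housekeeping writes (write amplification) eat into that budget.
    return capacity_gb * cycles_per_cell / write_amplification / 1000

# A hypothetical 1 TB drive rated at 3,000 cycles with 2x write amplification:
print(endurance_tb_written(1000, 3000))   # 1500.0 TB of host writes
```

Even with conservative assumptions, the total is large, which is why a busy drive can still take years to wear out.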

What the makers do, with both the chips and the SSD devices, is incorporate a percentage of spare capacity that they can switch in to replace blocks of cells that have failed or are getting tired. In practice they have to do that anyway, because even new devices can have faults and they need a way to work around those, or they'd be binning an awful lot of product. They also spread the work evenly across the device ("wear leveling") so that no one area wears out first.
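To make that concrete, here's a toy sketch of the idea: a simplified flash translation layer that hides some spare blocks from the host, steers each write to the least-worn healthy block, and retires blocks that hit their erase budget. This is a deliberately naive illustration, not how any real firmware is written.

```python
# Toy sketch of spare capacity and wear leveling in an SSD controller
# (hypothetical simplification, not any real firmware).
class ToyFTL:
    def __init__(self, visible_blocks, spare_blocks, erase_budget):
        self.erase_budget = erase_budget          # rated erase cycles per block
        total = visible_blocks + spare_blocks     # spare % is hidden from the host
        self.erases = {b: 0 for b in range(total)}
        self.retired = set()
        # logical block -> physical block mapping
        self.map = {lb: lb for lb in range(visible_blocks)}

    def write(self, logical_block):
        # Wear leveling: steer the write to the least-worn healthy block.
        target = min((b for b in self.erases if b not in self.retired),
                     key=lambda b: self.erases[b])
        self.map[logical_block] = target
        self.erases[target] += 1
        if self.erases[target] >= self.erase_budget:
            self.retired.add(target)              # switch in spare capacity

ftl = ToyFTL(visible_blocks=4, spare_blocks=2, erase_budget=3)
for _ in range(12):
    ftl.write(0)                                  # hammer one logical block
print(sorted(ftl.erases.values()))                # [2, 2, 2, 2, 2, 2]
```

Even though the host hammers a single logical block, the wear is spread evenly across all six physical blocks, spares included, so no single area burns out.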

The chips also tend to be more robust at low density and less robust at high density, whether that's cells per square inch or bits stored per cell.

Pick your trade-off, but remember that low density and high spare capacity cost both chip area and money.

There are clues from the various manufacturers about how much trade-off they're making. High-cost server-grade parts should(!) be significantly better than cheap high-capacity ones. The usual story, in fact, but perhaps more so with SSDs.

FWIW I don't think I've yet had an SSD fail, though I was probably later to the party than many because I already understood what I've written above.

I've long used WD "spinning rust" drives and found them very reliable. I would expect their SSDs also to be good, but I have, I think, only the one WD SSD and it's almost new.


Jazz relative beginner, starting at a much older age than was helpful.
AVL:MXE Linux; Windows 11
BIAB2025 Audiophile, a bunch of other software.
Kawai MP6, Ui24R, Focusrite Saffire Pro40 and Scarletts