metadat 5 hours ago | next |

It's going to be rough without AnandTech reporting anymore. I wonder if a new outlet will spring up to fill the void.

https://news.ycombinator.com/item?id=41399872

Here's hoping this PM9E1 drive makes it into the Samsung EVO 9x series drives.

I'm curious why the capacity only goes to 4TB; aren't there a bunch of 8TB NVMe drives out there? When will we see consumer-grade 16TB SSDs? Capacity doesn't seem to have increased in more than half a decade.

Panzer04 3 hours ago | root | parent | next |

4TB seems like the upper end for most normal consumers, I would hazard. We had 1-2TB HDDs a decade ago, and there's been little reason to go higher in the consumer space. Arguably, SSDs only now becoming cheap enough at those capacities may have held things back, but even so, I think we're running out of things that consume that much space.

Video and pictures are the main culprits (even in games), but 4K is likely to remain the upper end of consumer usage for the foreseeable future, photos have sat at 20-40MP for a decade, and the perceptible quality benefit of going higher is fairly minimal. We can always use more space, but from a practical perspective there's no longer the same explosion in required space driven by everything else scaling up to use it, I'd say.

pixl97 3 hours ago | root | parent | prev |

The question is whether consumers are willing to pay the prices of the larger SSDs. I consider myself a prosumer and haven't needed that much fast SSD storage myself.

jiggawatts 14 minutes ago | prev | next |

The IT industry as a whole still hasn't quite internalised that servers now have dramatically worse I/O performance than the endpoints they are serving.

For example, a project I'm working on right now is a small data warehouse (~100GB). The cloud VM it is running on provides only 5,000 IOPS with a relatively high latency (>1ms).

The laptops that pull data from it all have M.2 drives with 200K IOPS, 0.05ms latency, and gigabytes per second of read bandwidth.

It's dramatically faster to just zip up the DB, download it, and then manipulate it locally. This includes the download time!
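Back-of-envelope sketch of why (the IOPS figures are the ones above; the 10M-random-read workload and 1Gbps download link are assumptions, purely illustrative):

    # Query in place on the cloud VM vs. download the DB and query locally.
    # IOPS figures from above; workload size and link speed are assumed.
    DB_GB = 100
    REMOTE_IOPS = 5_000        # cloud VM's provisioned disk
    LOCAL_IOPS = 200_000       # laptop M.2 drive
    RANDOM_READS = 10_000_000  # hypothetical analytical workload
    LINK_GBPS = 1              # assumed download link

    remote_s = RANDOM_READS / REMOTE_IOPS             # 2000 s of I/O alone
    download_s = DB_GB * 8 / LINK_GBPS                # ~800 s to pull the whole DB
    local_s = download_s + RANDOM_READS / LOCAL_IOPS  # ~850 s, download included
    print(f"remote: {remote_s:.0f}s, local: {local_s:.0f}s")

Even with the transfer folded in, local wins by more than 2x under these assumptions, and the gap widens as the workload grows.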

The cheapest cloud instance that even begins to outperform local compute is about $30K/month, and would be blown out of the water by this new Samsung drive anyway. I don't know what it would cost to exceed 15GB/s read bandwidth... but I'm guessing: "Call us".

Back in the Good Old Days, PCs and laptops would have a single 5400 RPM drive with maybe 200 IOPS and servers would have a RAID at a minimum. Typically they'd have many 10K or 15K RPM drives, often with a memory or flash cache. The client-to-server performance ratio was at least 1-to-10, typically much higher. Now it's more like 10-to-1 the other way, and sometimes as bad as 1000-to-1.

Aerroon 3 hours ago | prev |

>Comparatively, we now see the Gen 5 Samsung PM9E1 achieving a whopping 14.5 GB/s read and 13 GB/s write

Isn't this comparable to DDR3 memory?
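Rough math, assuming standard JEDEC DDR3 transfer rates and a single 64-bit channel:

    # DDR3 peak bandwidth = transfer rate (MT/s) * 8 bytes per 64-bit channel
    for mts in (1333, 1600, 1866):
        print(f"DDR3-{mts}: {mts * 8 / 1000:.1f} GB/s per channel")
    # -> 10.7, 12.8, 14.9 GB/s

So 14.5 GB/s is roughly single-channel DDR3-1866 territory, though at far higher latency.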

I wonder if at some point we'll have GPUs extend their memory with something like a RAID array of SSDs.

Panzer04 3 hours ago | root | parent |

SSDs still degrade, though. Optane was mooted for something like this, but it still ended up too expensive and not good enough at either role (i.e. unprofitable) in the end.

Pushing 10GB/s of writes into an SSD with 1000TBW of write endurance would kill it in ~100,000 seconds, or a little over a day of continuous use - and I'd expect a GPU would probably come fairly close to that.
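For reference, the arithmetic (the 1000TBW rating and 10GB/s sustained write rate are the assumed figures above):

    # Time to burn through rated write endurance at a sustained write rate
    TBW_GB = 1000 * 1000   # 1000 TBW expressed in GB
    WRITE_GB_PER_S = 10    # assumed sustained write rate
    seconds = TBW_GB / WRITE_GB_PER_S
    print(f"{seconds:,.0f} s = {seconds / 3600:.1f} h")  # 100,000 s = 27.8 h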