Does this require a different mindset, given the different interface and the on-card wear-leveling/error-correction available in this form factor?

You're essentially getting a RAID controller and a set of storage devices behind it in one package, so one needs some level of trust in the device's internal redundancy. It's tempting to worry about putting sensitive data on a single device, but that concern should be counter-balanced with a healthy understanding of the "RAID is not a backup" principle: always be prepared, with good backups, for the failure of a redundant component or for a user to delete the data on it.
Should I be concerned about Fusion-io failure any more than I'd be concerned about a RAID controller failure or a motherboard failure?

A failure of the entire device would be pretty much analogous to the loss of a RAID controller or motherboard. I'd be approximately as worried about the Fusion-io card as about those other single-point-of-failure components, though I don't have experience with the devices at large enough scale to compare failure rates using hard data.

Do I need to run two ioDrive2 cards and join them with software RAID (md or ZFS), or is that unnecessary?

Adding redundancy on top of what the device already has (say, software RAID among multiple Fusion-io cards) would be a lot like doing software RAID between two hardware RAID groups on two different RAID controllers: worthwhile for systems warranting extreme redundancy, to remove an additional single point of failure, but not for common deployments (a 10-minute RPO on a mirror should be good enough for most applications). The on-device redundancy should handle failures of the flash chips just fine; it's analogous to RAID among all of the components doing the actual data storage. In a statistically large deployment you will still see drive failures despite the internal safeguards. Do you need absolute maximum availability despite the cost? Is a failure and possible downtime expensive? Go ahead and mirror the drives. In many setups where you would deploy Fusion-io drives, you'll have other safeguards built in (redundancy at the node level), so mirroring doesn't make as much sense. Are you fine with a single-controller setup? Probably.
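If you do decide the deployment warrants a second card, the software mirror discussed above can be sketched with md. This is a hedged example: `/dev/fioa` and `/dev/fiob` are the device names the Fusion-io driver typically exposes, but verify the actual names on your system before running anything.

```shell
# Sketch: mirror two Fusion-io cards with Linux md (RAID 1).
# Assumes the driver exposes the cards as /dev/fioa and /dev/fiob --
# confirm the real device names on your system first.
mdadm --create /dev/md0 \
      --level=1 \
      --raid-devices=2 \
      /dev/fioa /dev/fiob

# Watch the initial sync, then put a filesystem on the mirror.
cat /proc/mdstat
mkfs.xfs /dev/md0
```

The ZFS alternative mentioned in the question would be a simple two-device mirror vdev (`zpool create tank mirror /dev/fioa /dev/fiob`), which adds end-to-end checksumming on top of the redundancy.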
Ultimately, it comes down to your failure model. Historically, we've always RAIDed everything, since the cost of doing so has been negligible. Another $500 for a drive for mirroring? Totally worth the cost without even considering it. When you're talking about another $10K+ to turn on mirroring, it needs a bit more consideration. The Fusion-io cards do have quite good internal redundancy; this isn't the kind of hardware where your disk is a single chip. Think of a Fusion-io card as a RAID controller with disks behind it. In most of the situations where I've observed failure, it's been a firmware problem that affected both members of a mirror, so RAID would not have mattered.
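Whatever failure model you settle on, it helps to actually watch the card's internal redundancy at work rather than trust it blindly. A minimal monitoring sketch, assuming the ioMemory VSL utilities are installed (`fio-status` is the vendor's status tool; the exact output fields vary by driver version, so adjust the patterns to what your version prints):

```shell
# Sketch: periodic health check for a Fusion-io card, e.g. from cron.
# Assumes the ioMemory VSL tools are installed; output field names
# differ between driver versions, so tune the grep patterns.
fio-status -a | grep -Ei 'health|reserve|state'
```

Trending the reserve/health figures over time gives early warning of flash wear long before the card's internal redundancy is exhausted.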
This is a single-server setup with no real high-availability option. There is asynchronous replication with a 10-minute RPO that mirrors transaction logs to a second physical server.
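As an illustration of how a 10-minute RPO like this is commonly achieved without shared storage, a cron-driven log-shipping job along these lines would do it. This is a generic sketch, not the actual mechanism in use here: the paths, script name, and standby hostname are hypothetical placeholders.

```shell
#!/bin/sh
# Sketch: ship database transaction logs to a standby server.
# /var/lib/mydb/txlogs and standby-host are hypothetical placeholders.
set -eu
rsync -a --partial /var/lib/mydb/txlogs/ standby-host:/var/lib/mydb/txlogs/

# Run every 10 minutes from cron to get a ~10-minute RPO:
#   */10 * * * * /usr/local/bin/ship-txlogs.sh
```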
I plan to use the HP-branded Fusion-io ioDrive2 1.2TB card for a proprietary standalone database solution running on Linux. Can I run reliably with a single Fusion-io card installed in a server, or do I need to deploy two cards in a software RAID setup?

Fusion-io isn't very clear (almost misleading) on the topic in their marketing materials. Given the cost of the cards, I'm curious how other engineers deploy them in real-world scenarios.