Fibre Channel Storage?
Dave Robertson asks: "Fibre Channel storage has been filtering down from the rarefied heights of big business and is now beginning to be a sensible option for smaller enterprises and institutions. An illuminating example of this is Apple's Xserve RAID, which has set a new low price point for this type of storage - with some compromises, naturally. Fibre Channel switches and host bus adapters have also fallen in price, but storage arrays such as those from Infortrend or EMC are generally still aimed at the medium- to high-end enterprise market and are priced accordingly. These units are expensive in part because they aim for very high availability and are therefore well-engineered, with dual redundant everything." This brings us to the question: is it possible to build your own Fibre Channel storage array?
"In some alternative markets - education for example - I see a need for server storage systems with very high transaction rates (I/Os per second) and the flexibility of FC, but without the need for very high availability and without the ability to pay enterprise prices. The Xserve Raid comes close to meeting the need but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives.
I'm considering building my own experimental Fibre Channel storage unit. Disks are available from Seagate, and SCA-to-FC T-card adapters are also available. A hardware RAID controller would also be nice.
Before launching into the project, I'd like to cast the net out and solicit the experiences and advice of anyone who has tried this. It should be relatively easy to create a single-drive unit similar to the Apcon TestDrive, or a JBOD, but a RAID array may be more difficult. The design goals are to achieve a high I/O rate (we'll use Postmark to measure this) in a Fibre Channel environment at the lowest possible price. We're prepared to compromise on availability and 'enterprise management features'. We'd like to use off-the-shelf components as far as possible.
Seagate has a good Fibre Channel primer if you need to refresh your memory."
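For readers who want to reproduce the submitter's measurement criterion before buying anything, here is a minimal sketch of the kind of small-file create/read/append/delete transaction mix that Postmark exercises. It is not the real Postmark tool, and the pool size, transaction count, and file size range are arbitrary assumptions chosen only for illustration.

```python
# Rough, self-contained sketch of a Postmark-style small-file transaction mix.
# NOT the real Postmark benchmark; POOL, TRANSACTIONS, and SIZE are assumptions.
import os
import random
import tempfile
import time

POOL = 500                # files kept "live" at once (assumption)
TRANSACTIONS = 5000       # total transactions to run (assumption)
SIZE = (512, 16 * 1024)   # file size range in bytes (assumption)

def run(target_dir):
    rng = random.Random(42)
    files = []
    # Seed the pool with small files.
    for i in range(POOL):
        path = os.path.join(target_dir, f"f{i:06d}")
        with open(path, "wb") as fh:
            fh.write(os.urandom(rng.randint(*SIZE)))
        files.append(path)

    start = time.time()
    for i in range(TRANSACTIONS):
        if rng.random() < 0.5:
            # Read one whole file, then append a small chunk to another.
            with open(rng.choice(files), "rb") as fh:
                fh.read()
            with open(rng.choice(files), "ab") as fh:
                fh.write(os.urandom(1024))
        else:
            # Delete one file and create a replacement.
            victim = rng.choice(files)
            files.remove(victim)
            os.remove(victim)
            path = os.path.join(target_dir, f"f{POOL + i:06d}")
            with open(path, "wb") as fh:
                fh.write(os.urandom(rng.randint(*SIZE)))
            files.append(path)
    elapsed = time.time() - start
    print(f"{TRANSACTIONS / elapsed:.0f} transactions/sec in {target_dir}")

if __name__ == "__main__":
    # Point dir= at a mount on the array under test; "." is just a placeholder.
    with tempfile.TemporaryDirectory(dir=".") as d:
        run(d)
```

Point it at a directory on the storage under test, raise TRANSACTIONS until a run takes a minute or two, and compare the transactions/sec figure across configurations.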
why "build" your own array? (Score:5, Interesting)
My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.
I think new(er) Fibre Channel gear is getting cheaper, and what was high-end, data-center-only, big-$$ equipment a few years ago hits an "at home" price point now.
Re:why "build" your own array? (Score:2, Funny)
That's going a long way to protect one's pr0n stash.
Re:why "build" your own array? (Score:1, Funny)
Now that's what I call hardcore porn.
Re:why "build" your own array? (Score:3, Interesting)
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:1)
This produced by the SAME Veritas company?? (Score:2)
Symantec is a financially powerful corp. They DID buy Veritas... unless this is a different Veritas corp.
~D
Re:This produced by the SAME Veritas company?? (Score:2)
Yep, and the minute that deal went through I stopped recommending any/all Veritas' products!
Re:why "build" your own array? (Score:3, Interesting)
Re:why "build" your own array? (Score:5, Informative)
The Veritas cluster file system (which is the reason I'd imagine someone would go through all the effort) has the ability for multiple systems to access a single volume at the same time, the moral equivalent of NFS, but without the NFS server or the speed problems associated with NFS due to the filesystem abstraction (i.e. it's good for databases).
The only Free competitor that I know of for this is GFS.
ZFS is a very powerful filesystem/volume manager, but it's more akin to LVM + very smart filesystem access.
Re:why "build" your own array? (Score:2)
What about OCFS [oracle.com] and OCFS2 [oracle.com]?
Re:why "build" your own array? (Score:3, Informative)
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:1)
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:1)
Re:why "build" your own array? (Score:5, Funny)
I call bs on this. I demand you post the IP address of the said server so I can verify your claims.
Re:why "build" your own array? (Score:3, Funny)
Re:why "build" your own array? (Score:1)
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:3, Interesting)
I need something that will do RAID 5 in hardware and show up to the OS as a single device, just like the A1000 does... I considered an A5200, but I was told I'd need to use software RAID on it.
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:2)
Also, software RAID makes it more difficult to put my root filesystem (and kernel, bootloader, etc.) on there.
Finally, software RAID isn't likely to hot-swap as nicely as hardware RAID does.
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:2)
Even better is to check AnySystem.com [anysystem.com] for your needs. Their everyday prices are excellent, and their eBay prices are even better! (Often you can get an 8-way system loaded with Fibre Channel drives and gigs of RAM for $2000-$3000.) I don't have any affiliation with them other than trying to get my boss to replace our expensive Windows servers with AnySystem servers.
Has anyone else used these guys?
Re:why "build" your own array? (Score:3, Funny)
Yep, that's how the best ones come.
Re:why "build" your own array? (Score:5, Interesting)
So I did exactly that: went on eBay and bought a pair of Photons. Only 5100s, but 28 drives was pretty nice.
I was pretty underwhelmed. They were a steal when I got them (well, a "good" price when you factor in shipping), but the performance was never there, even with really good 10K.6 Cheetahs. RAID never helped, no matter how it was configured. It just didn't seem that useful.
Plus the A5200s weigh 125 lbs, and hauling them between dorm rooms proved less than fun.
And even locked in my basement closet, I could hear the roar of the two A5100s. I'd been "meaning" to get rid of them for a while, but now that I'm changing states... it was finally time. I sold 'em on Craigslist for $280 for both - same as I bought 'em for, and that includes shipping.
I dunno. If I were anyone with a brain, I'd wait another year for SAS to go ape-shit on everyone. The enclosure/host-controller split is a smart breakdown that'll really help beat away the single-vendor solution... the reason everyone can charge so much for hardware now is that everything is one unit - the enclosures, the controller - a big package with a nice margin. When XYZ company can come along and sell you a 24-drive enclosure for pennies that you can plug into a retail SAS controller... it's a game changer. Just watch the ridiculous margins drop.
If you need something now, just get SATA RAID. Intel's new I/O processor is amazing; it'll give you really nice performance. But otherwise, I'd say wait for SAS. I suppose it's still more expensive than a pair of A5100s, but I'd wager the performance will be better.
As a side note, I sometimes wonder whether the fibre cabling I bought was bad. I really couldn't sustain more than 40 MB/s even doing XFS linear copies, even with 14 drives dedicated to the task. I'm not sure if bad cabling would've given me some kind of overt error, or might have just quietly degraded my performance.
Myren
Re:why "build" your own array? (Score:1, Informative)
40MB/s seems awfully close to a saturated SCSI bus of that era. E.g., Sun Ultra workstations have 40MB/s SCSI busses in them. Probably the way to boost performance is to have multiple arrays, each using a dedicated controller in the workstation/server (ensure controllers are on separate PCI busses, too) and the mirror set up across the controllers (gee, three controllers, three arrays, say five disks per array...I'll let you cover the
Re:why "build" your own array? (Score:2)
Re:why "build" your own array? (Score:1)
Re:why "build" your own array? (Score:1)
Re:why "build" your own array? (Score:1)
speedwise (Score:1)
Re:speedwise (Score:3, Informative)
Re:speedwise (Score:1)
YMMV and is highly dependent on the hardware vendor.
My memory recalls (Score:5, Informative)
Transactions make significant use of CPU resources in ATA-based systems. The only cards I am aware of that move ATA transactions to hardware come from Advansys and Adaptec; both use SCSI chipsets with ATA translators. They're available for about $190.00 each, but hard to find.
On a PII 450, CPU usage with software RAID was almost 40% at full throttle with 3 PATA drives. The same system went down a bit using a PCI card (not sure why, actually; presumably some operations are offloaded to the PCI card, reducing the overhead, but don't quote me on it).
The main problem with all of this is cost vs. redundancy vs. speed... which brings us to the old issue familiar from cars: the FAST, GOOD, CHEAP triangle.
FAST
/ \
GOOD-----CHEAP
CHEAP + FAST != GOOD
GOOD + CHEAP != FAST
FAST + GOOD != CHEAP
Since my own buying experience with FC is limited (read: nonexistent), I can only tell you what I've learned on SCSI vs. PATA/SATA rigs.
SATA systems that do not use an Adaptec chipset are usually mere translators that make use of CPU resources to monitor drive activities. 3Ware cards seem to transcend this limitation rather well and they provide a fine hardware RAID setup for ATA drives.
PATA systems that do not use an Adaptec chipset are likewise mere translators.
SCSI interfaces make use of a hardware chipset that monitors and controls each drive... thus relieving the CPU of being abused by drive-intensive operations (think of a high-profile FTP or CVS/Subversion server and you get the idea of what would happen to the CPU if it were forced to act as both server processor AND software RAID monitor).
On to bandwidth. Serial ATA setups suffer from an issue I've found with almost all setups: without separate controllers, SATA and PATA setups split the total bandwidth across the active drives. More recent SATA controllers may have fixed this, but in general I've found that my systems slowed down in data transfer rate when using drives or arrays on the same card. SCSI seems to bypass this by giving each drive a dedicated pipeline (I'm not sure what the set amount is), though some of the older chipsets do have issues when handling a full set of 15 drives.

The primer linked in the article at Seagate is on the money in its LVD vs. FC FAQ. You will most likely require a backplane and a full setup, and it probably won't be cheap. The big difference is that FC setups are usually bulletproof and can support a MASSIVE number of drives (126 per loop), with the only thing faster being a VERY high-end solid-state drive or array. Those also have very low latency; but as far as raw speed goes, set up a good RAM drive on a dedicated memory bank with backup battery and EMP shielding... okay, that's way expensive
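To put rough numbers on the shared-channel point above - the figures are illustrative assumptions, not measurements of any particular controller - the same arithmetic applies whatever the shared segment actually is (an ATA cable, a PCI bus, or an FC loop):

```python
# Napkin math for the shared-channel effect described above.
# All MB/s figures are illustrative assumptions, not measurements.
def per_drive_throughput(channel_mb_s, drive_mb_s, active_drives):
    """Effective MB/s each drive sees when one channel is shared by N active drives."""
    return min(drive_mb_s, channel_mb_s / active_drives)

for n in (1, 2, 4, 8):
    shared = per_drive_throughput(100, 50, n)     # e.g. a ~100 MB/s shared channel
    dedicated = per_drive_throughput(150, 50, 1)  # e.g. a dedicated link per drive
    print(f"{n} active drives: shared channel {shared:5.1f} MB/s each, "
          f"dedicated links {dedicated:5.1f} MB/s each")
```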
~D
PS - I realize I haven't answered your question directly, but hopefully I've simplified things for everyone else out there.
Re:My memory recalls (Score:2)
Re:My memory recalls (Score:2)
~D
Re:speedwise (Score:3, Informative)
When I worked at an FC startup, FC was 4Gbps full duplex, and 10Gbps was in the works.
eSATA (Score:1, Interesting)
Re:eSATA (Score:2)
Re:eSATA (Score:2)
Honestly, it seems like that company's products are only popular because they got a post on Slashdot. Not sure why I'd ever want one of their boxes, as the whole concept just feels "wrong". Couldn't they just have put that energy into something like low-cost iSCSI or FC boxes? (using SATA drives, of course)
Re:eSATA (Score:4, Interesting)
I used to be really against iSCSI, as the native stacks on various OSes just did not deal with it well. By that I mean that a 50 MB/s file transfer would consume almost 100% of a 3 GHz CPU. Also, the hard limit on GigE transfers of 85 MB/s (TCP/IP overhead + iSCSI overhead) was just too low.
Now, that has all changed. You can get TCP/IP offload engines for just about every OS (I don't work with Windows, so I don't know the status there), and 10 gigabit Ethernet has become financially reasonable.
For instance, the T210-cx [chelsio.com] is around $800, and will deliver a sustained 600 MB/s (not peak or any other crap). Also, the latency on a 1500 MTU 10-gbs ethernet fabric is something to behold.
I think by the end of this year, we will see iSCSI devices on 10 GbE that outperform traditional SAN equipment in the 2 Gb/s environment, in every respect (including price), by a large margin. 4 Gb/s SAN could come close, but I still think hardware-accelerated iSCSI has a _ton_ of potential.
If I were starting a storage company today, I would be focusing exclusively on the 10gbs iSCSI market. It is going to explode this year.
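For the curious, here is the back-of-the-envelope framing math behind the parent's GigE ceiling. The header sizes are standard Ethernet/IPv4/TCP figures; the gap between the theoretical ceiling and the ~85 MB/s seen in practice is host-side copy/interrupt overhead, which is exactly what TOE cards attack. The function name is just for illustration.

```python
# Theoretical TCP payload ceiling for iSCSI over standard-MTU Ethernet.
# iSCSI adds its own 48-byte PDU headers on top of this, shaving off a bit more.
def tcp_payload_ceiling(link_gbps, mtu=1500):
    eth_overhead = 8 + 14 + 4 + 12   # preamble + Ethernet header + FCS + inter-frame gap
    ip_tcp = 20 + 20                 # IPv4 + TCP headers, no options
    payload = mtu - ip_tcp
    wire_bytes = mtu + eth_overhead
    bytes_per_sec = link_gbps * 1e9 / 8
    return bytes_per_sec * payload / wire_bytes / 1e6   # MB/s of TCP payload

for gbps in (1, 10):
    print(f"{gbps:>2} Gb/s Ethernet, 1500-byte MTU: "
          f"~{tcp_payload_ceiling(gbps):.0f} MB/s TCP payload ceiling")
```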
Try areca (Score:3, Interesting)
http://www.areca.com.tw/products/html/fibre-sata.
Re:Try areca (Score:2, Interesting)
-Fran
Re:Try areca (Score:2)
I have 6 Maxtor MaxLine Pro 500GB SATA-II drives connected to it, configured as a hardware RAID 6. Writes are fairly slow (~32 MB/s) while reads are not too shabby (200 MB/s). The card does have some really nice features, CLI and HTTP tools for
alternative to FC (Score:4, Informative)
Re:alternative to FC (Score:2)
Fibre Channel is its own world, and there's no real carry-over from IP networking skills. No software analyzer. No 'ping' command. Of course, if it all works when you plug it together, then no worries!
Re:alternative to FC (Score:1)
Also, the system tends to tell you sooner rather than later about issues on your fabric or loop.
XServe RAID not fast enough? (Score:5, Informative)
Speaking as the owner of two XServe RAID devices (5TB and 7TB models), as well as several other Fibre Channel devices, I can say that the Apple Fibre Channel unit is by no means slow. Each SATA drive has pretty much equal performance to the SCSI drives we use in our Dell head node. Combined, there are times when we can pull several hundred megs a second off the XRAIDs. Plus, our XRAID has been fairly immune to failures thus far. I have yanked drives out of it and it just keeps right on going.
Another little hint: if you are really worried about speed, you can just install large, high-RPM SATA drives yourself. It's not that hard to do at all.
Check out Alien Raid [alienraid.org] for more information.
Re:XServe RAID not fast enough? (Score:3, Informative)
SCSI Drives
1000G (r
Re:XServe RAID not fast enough? (Score:2)
Because they get... (Score:2)
10K RPM (not 15,000, mind you) Seagate and Maxtor. Cheapest entries on Pricewatch are both $495 plus $5 or so shipping UPS ground.
SCSI U320 (nice) and of course they're 300 gigs. GREAT!
Okay, now here's the kicker: ALL the other suppliers provide them at $640.00 USD or more. You save some, but the downside, I wager, is that they won't carry enough to build a large RAID 5 setup. Also, the smaller drives are faster in a large array bec
Some corrections (Score:2)
Second, 36GB is hardly the top end for SCSI drives. For a little more than $300 you can get 147GB SCSI drives [newegg.com]. Also keep in mind that more drives means more striped performance. So more drives isn't necessarily a bad thing.
Granted, it is still more expensive for the SCSI setup. I just think you should make a fair comparison.
-matthew
Re:Some corrections (Score:2)
Compromises aplenty when building RAID arrays. Performance, heat, power usage, noise, cost... etc.
Re:XServe RAID not fast enough? (Score:3, Informative)
Just to point out...the XServe RAID uses Ultra ATA drives, *NOT* SATA drives. I spent the last month or two researching RAID arrays, and that was one of the most disappointing things I saw...
Xserve RAID features a breakthrough Apple-designed architecture that combines affordable, high-capacity Ultra ATA drive technology with an industry standard 2Gb Fibre Channel interface for reliable... from Apple [apple.com].
Re:XServe RAID not fast enough? (Score:3, Informative)
I would so love to prove you wrong right now. Turns out you are completely correct: both of our XServe RAID devices use 7200 RPM Ultra IDE Hitachi drives. This invalidates at least some of our benchmarking, as it was done single-drive on our XServe systems (with what were supposed to be like drives, but are Serial ATA drives). All of our single-drive benchmarks are invalid then (or rather, are meaningless to this discussion =) However, the XServe RAID still performed very well when doing a 5 disk vs 5 disk RAID set
Re:XServe RAID not fast enough? (Score:1)
Re:XServe RAID not fast enough? (Score:1)
Where I work, we use the 4000-series switches to get 192 10/100 ports available where performa
Re:XServe RAID not fast enough? (Score:2)
Yes, but what I was saying is that blanket-excluding drives because they use a SATA->FC converter limits the project without gaining anything. We benchmarked our drives against SCSI drives and other Fibre Channel solutions. The SATA drives in the XServe RAID kicked the 10K RPM SCSI drives around the block. The 15K SCSI drives did better... but not "drastically better" when compared 5-disk array vs. 5-disk array.
So my question is this: why limit yourself? Did you look at the "SATA" specifications on th
Infortrend (Score:2)
Re:Infortrend (Score:1)
When I worked at UUNET (1997-2000) we had hundreds of servers with Infortrend-based arrays as their storage.
I've had reasonable service, great pricing, and bad support from CAEN Engineering, one of their resellers. I have heard good things about Western Scientific.
Re:Infortrend (Score:1)
My company is a time-honoured user of Infortrend arrays - supported in the UK by European Raid Arrays (e-raid.co.uk).
A good part of the Soho film industry in London runs on Infortrend RAIDs supplied by ERA.
We've never lost any data on the RAID as long as I've been here (5yrs+), had minimal drive failures and - I think - two PSU failures.
We have many TB of RAID and numerous RAIDs (FC, SCSI and SATA).
But then, we do keep
Has been done a while back... (Score:1)
http://web.archive.org/web/20010406133220/www.tcn
Why FC? (Score:1)
In your question, you said you don't want a whole lot of redundancy or high availability. You can do nearly the same with an inexpensive computer with a large RAID, Gigabit Ethernet, and NFS or Samba.
If there's money riding on this (i.e. you will lose big money each second the connection is down), then you need FC and service contra
Look at Storcase (Score:2)
All FC RAID is going to be high-availability (Score:5, Informative)
1) High numbers of transactions per second. Your focus here is going to be on units that can hold a LOT of drives (not necessarily of high capacity); you want as many sets of drive heads going as possible (see the rough per-spindle sketch after item 2 below). In addition, SATA drives are not made to handle high duty cycles at high transaction rates - the voice coils have insufficient heat dispersion. (They are just plain cheaper drives.) High transaction rates require a pretty expensive controller, and you won't be able to avoid redundancy, but that isn't a problem, since you are going to need both controllers to support the IOPS (I/Os per second).
2) High raw bandwidth. If you need raw bandwidth and your data load is non-cacheable, then really software RAID + a JBOD may be able to get the job done here, if you have a multi-CPU box so one CPU can do the RAID-ing. Again, two controllers are usually going to provide you with the best bandwidth. SATA striped across a sufficient number of drives can give you fairly decent I/O, but not as good as FC.
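To put rough numbers on the "as many drive heads as possible" point in (1): the seek times and spindle count below are ballpark assumptions for the drive classes discussed in this thread, not vendor specs.

```python
# Rough per-spindle random-IOPS estimate: service time = seek + half a rotation + transfer.
# Seek times and the 14-spindle shelf are assumptions, not vendor specifications.
def random_iops(avg_seek_ms, rpm, transfer_ms=0.1):
    rotational_latency_ms = 0.5 * 60000.0 / rpm   # half a rotation on average
    return 1000.0 / (avg_seek_ms + rotational_latency_ms + transfer_ms)

drives = {
    "7200 RPM ATA/SATA (~9.0 ms seek)": (9.0, 7200),
    "10K RPM SCSI/FC   (~5.0 ms seek)": (5.0, 10000),
    "15K RPM SCSI/FC   (~3.5 ms seek)": (3.5, 15000),
}
spindles = 14  # assumption: one reasonably full FC shelf
for name, (seek, rpm) in drives.items():
    per_drive = random_iops(seek, rpm)
    print(f"{name}: ~{per_drive:.0f} IOPS/drive, "
          f"~{per_drive * spindles:.0f} IOPS across {spindles} spindles")
```

The absolute numbers matter less than the shape: transaction rate scales with spindle count far more than with interface, which is why a shelf full of small, fast drives beats a few big ones for this workload.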
There are "low end" arrays available that will offer reduced redundancy. The IBM DS 400 is an example. This box is OEM'd from Adaptec, and pretty much uses a ServRAID adapter as the "guts" of the box. This unit uses FC on the host side, and SCSI drives on the disk side. It is available in single and dual controller models. (Obligatory note: I do work for IBM, but I am not a sales drone.) This setup has the distinct advantage of being fully supportable by your vendor. A homegrown box will not have that important advantage.
Don't be scared away by the list price, as nobody ever pays it. Contact your local IBM branch office (or IBM business partner/reseller), and they can hook you up.
This unit is also available as part of an "IBM SAN Starter Kit" which gives you a couple of bargain-barrel FC switches, four HBA's, and one of these arrays w/
SirWired
Re:All FC RAID is going to be high-availability (Score:2)
Obviously, Postmark is a transaction rate/metadata benchmark.
It's a question of values with many trade-offs (Score:5, Insightful)
Case One
Let's say this array is to be used for a single application that needs lots of pull and is populated initially from other sources, with a low delta of updates. In other words, largely reads vs. writes. Caching may help; and if so, you can tune the app (and the OS) to get fairly good performance from SATA RAID or from FC JBODs in a RAID 0 or 5 configuration. (There is no real RAID 0; it's just a striped array without redundancy/availability and is therefore a misnomer.)
Case Two
Maybe you need a more generalized SAN, as it will be hit by a number of machines with a number of apps. You'll need better controller logic. You'll likely initially need a SAN that presents a single SCSI LUN, where you can log on to the SAN via IP for external control of the can that holds the drives (and controls the RAID level, and so on). This is how the early Xserve RAID worked, and how many small SAN subsystems work. Here, the I/O blocks/problems come at different places -- mostly at the LUN when the drive is being hit by multiple requests from different apps connected via (hopefully) a non-blocking FC switch (think an old eBay-purchased Brocade SilkWorm, etc.). SCSI won't necessarily help you much... and a SATA array has the same choke point at the LUN. Contention is the problem here; delivery is a secondary issue unless you're looking for superlative performance with calculated streams.
Case Three
Maybe you're streaming or rendering and need concurrent paths in an isochronous arrangement with low latency but fairly low data rates -- just many of them concurrently. Studio editing, rendering farms, etc. Here's where a fat server connecting a resilient array works well. Consider a server that uses a fast, cached, PCI-X controller connected to a fat line of JBOD arrays. The server costs a few bucks, as does the controller, but the JBOD cans and drives are fairly inexpensive and can be high-duration/streaming devices. You need a server whose PCI-X bus isn't somehow trampled by a slow, non-PCI-X GbE controller, as non-PCI-X devices will slow down the bus. You also get the flexibility of hanging additional items off the FC buses, then adding FC switches as the need arises. At some point, the server's cache stops helping and the server becomes its own bottleneck -- but you'll have proven your point and will have what now amounts to a real SAN with real switches and real devices.
The SATA vs SCSI argument is somewhat moot. Unless you cache the SATA drives, they're simply 2/3rd the possible speed (at best) of a high-RPM SCSI/FC drive. It's that simple. uSATA will come one day, then uSATA/hi-RPM..... and they'll catch up until the 30Krpm SCSI drives appear.... with higher density platters....and the cost will shrink some more.
I've been doing this since a 5MB hard drive was a big deal. SCSI drives will continue to lead SATA for a while, but SATA will eventually catch up. In the meantime, watch the specs and don't be afraid of configuring your own JBOD. And if you want someone to yell at, the Xserve RAID is as good as the next one... except that it has the Apple Sex Appeal that seems a bit much on a device that I want to hide in a rack in another building.
Re:It's a question of values with many trade-offs (Score:1)
Try AoE instead (Score:3, Insightful)
A GigE switch is cheap, and a GigE port is easy to add, or you can use the existing one on a system. AoE sits below the IP stack, so there is little communication overhead, and it looks like a SATA drive in most ways. The primary vendor's appliance (www.coraid.com) will take a rack full of SATA and make it look like one drive via various RAID configs.
Yeah, FC is faster, but how many drives are going to be talking at once? Are you really going to fill the GigE and need FC to alleviate the bottleneck? If you are, then FC is probably not the right solution for you anyway.
Your mileage may vary, but I expect anyone will get comparable results for the price, and many will get excellent results overall.
Re:Try AoE instead (Score:1)
Of course, latency is still probably going to be an issue running on 'commodity' hardware - remember that, even though LAN connectivity works well for many things, it's designed for the general case and doesn't work perfectly for everything. That's where Inf
Advice from a SAN lab manager (Score:5, Informative)
0. Get everything off of eBay.
1. Stick with 1Gb-speed equipment. It's older, but an order of magnitude less expensive.
2. Avoid optical connections if you can -- for a small configuration, copper is just fine and often a lot less expensive. Fibre is good for long-distance hauls and >1Gb speeds.
3. Pick up a server board with 64-bit PCI slots, preferably at 66 MHz.
4. Buy a couple of QLogic 2200/66s. These are solid cards, and are trivial to flash to Sun's native Leadville FCode if you want to use a SPARC system and the native fibre tools. They also work well on Linux/x86 and Linux/sparc64. These should run about $25 each.
5. Don't buy a RAID enclosure; get a fibre JBOD. You can always reconstruct a software RAID set if your host explodes, provided you write down the configuration. If you blow a RAID controller, you're screwed. Besides, you won't want to pay for a good hardware RAID solution, and I have yet to see a card-based RAID for FC; the controller is always integrated into the array. I recommend a CLARiiON DAE/R. Make sure you get one with the sleds. These have DB-9 copper connections and, empty, should run about $200. Buy 2 or 4 of these, depending on how many HBAs you have. They'll often come with some old Barracuda 9s. Trash those; they're pretty worthless.
6. Fill the enclosures with Seagate FC disks. If you're not after maximum size, the 9GB and 18GB Cheetahs are cheap, usually like $10 a pop on eBay, and are often offered in large lots. They are so inexpensive it's hard to pass them up. Try to get the ST3xxxFC series, but do NOT buy ST3xxxFCV. The V indicates a larger cache, but also a different block format for some EMC controllers. They are a bitch to get normal disk firmware on.
7. Run a link from each enclosure to each HBA. Say you have 2 enclosures with 10 disks each. Simple enough, and 1Gb/s up and down on each link.
8. Use Linux software RAID to make a bigass stripe across all the disks in one enclosure, repeat on the second enclosure, and make a RAID 10 out of the two. Tuning the stripe size will depend on the application; 32K is a good starting point.
With that setup, you should pretty much max out the bandwidth of a single 1Gb link on each enclosure, enjoy both performance and redundancy with the software RAID, and not have to worry about any RAID controllers crapping out on you.
You should be able to get two enclosures, 20 disks, a couple of copper interconnects and some older HBAs for about $750 to $1,000 depending on eBay and shipping costs.
This should net you some pretty damn reasonable disk performance for random-access type I/O. This is NOT the right approach if you're looking for large amounts of storage. You'll get the RAID 10 redundancy in my example, but if you want real redundancy (and I mean performance-wise, not just availability -- you can drive a fucking truck over a Symmetrix, then saw it in half, and you probably won't notice a blip in the PERFORMANCE of the array -- something Fidelity and whatnot tend to like) you have to pay big money for it. The huge arrays are more about not ever quivering in their performance no matter what fails.
Hope this was of some use.
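As a sanity check on step 8's claim that a ten-disk stripe will max out a single 1Gb link per enclosure, here is a rough sketch; the per-disk throughput and capacity figures are assumptions for old 18GB Cheetahs, not measurements.

```python
# Napkin math for the two-enclosure setup above. Disk figures are assumptions.
def enclosure_numbers(disks, disk_gb, disk_mb_s, link_gbps=1.0):
    # FC uses 8b/10b encoding, so a nominal 1 Gb/s link carries roughly 100 MB/s of payload.
    link_mb_s = link_gbps * 1000 / 10
    stripe_mb_s = disks * disk_mb_s   # what the spindles could deliver together
    return min(stripe_mb_s, link_mb_s), stripe_mb_s, link_mb_s, disks * disk_gb

usable_mb_s, stripe_mb_s, link_mb_s, raw_gb = enclosure_numbers(disks=10, disk_gb=18, disk_mb_s=20)
print(f"Per enclosure: spindles can stream ~{stripe_mb_s:.0f} MB/s, link ceiling ~{link_mb_s:.0f} MB/s, "
      f"so expect ~{usable_mb_s:.0f} MB/s")
print(f"Mirroring the two striped enclosures: ~{raw_gb} GB usable (half of {2 * raw_gb} GB raw)")
```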
Re:Advice from a SAN lab manager (Score:1)
Second, yes, I did pay $750 for 90GB of storage with 1Gbit write bandwidth. It is highly redundant, and has effectively 10 spindles in the stripe; with what the original poster was implying (lots of random I/O) this would be quite high-performance with the data striped across 10 drives with t
Re:Advice from a SAN lab manager (Score:2)
My setup consumed a continuous 800W or so, not to mention any increase in air conditioning usage that resulted. I've since moved to a 4-drive 3ware setup, which is slower on raw reads (marginally, but not by enough for me to care), has abo
Re:Advice from a SAN lab manager (Score:2)
That's RAID 0+1, not RAID 10. Under Solaris Disksuite, it looks like RAID 0+1 but actually behaves like RAID 10; be sure of what you're actually doing on your setup.
For those not aware of the difference, RAID 0+1 is two stripes mirrored; if you lose a disk in one st
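A quick way to see why the distinction matters, using a hypothetical 8-disk layout (not anyone's actual setup), is to count which two-disk failures each arrangement survives:

```python
# RAID 0+1 (two 4-disk stripes mirrored) vs. RAID 10 (four 2-disk mirrors striped),
# enumerated over all two-disk failures of a hypothetical 8-disk array.
from itertools import combinations

DISKS = list(range(8))

def survives_0_plus_1(failed):
    stripe_a, stripe_b = set(DISKS[:4]), set(DISKS[4:])
    # Array survives as long as at least one whole stripe is untouched.
    return not (stripe_a & failed) or not (stripe_b & failed)

def survives_10(failed):
    mirrors = [set(DISKS[i:i + 2]) for i in range(0, 8, 2)]
    # Array survives as long as no mirror pair loses both members.
    return all(len(m & failed) < 2 for m in mirrors)

pairs = list(combinations(DISKS, 2))
ok_01 = sum(survives_0_plus_1(set(p)) for p in pairs)
ok_10 = sum(survives_10(set(p)) for p in pairs)
print(f"Two-disk failures survived: RAID 0+1 {ok_01}/{len(pairs)}, RAID 10 {ok_10}/{len(pairs)}")
```

RAID 10 also typically rebuilds faster after a single failure, since only one mirror partner has to be copied rather than a whole stripe.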
How about iSCSI? (Score:3, Insightful)
Re:How about iSCSI? (Score:2)
Here's a little write-up on it: http://linuxdevices.com/news/NS3189760067.html [linuxdevices.com]
It mentions how you can use ATA over Ethernet in combination with iSCSI. The ATAoE protocol has much less overhead than iSCSI, because ATAoE does not use TCP/IP; rather, it is its own non-routable protocol for local storage over the Ethernet hardware. It is ex
HP (Score:1)
Use commodity hardware (Score:2)
My suggestion to the OP is that if he wants to achieve a high I/O rate at the lowest possible price, then the answer is of course a Google-like approach: use commodity hardware.
For example, buy a lot of ATA/SATA hard disks (to spread the load over them), put them inside ATA-over-Ethernet enclosures (www.coraid.com; Linux driver available in any vanilla 2.6 kernel), and connect them with multiple Gigabit Ethernet links to the storage server. And the best part of all this: ATA/SATA is s
The question makes no sense (Score:2)
I'd recommend more SATA drives for the same price over fewer, more expensive FC drives. The differences in RAID controllers and number of drives have much more impact on array performance than the interface technology. Since FC controllers and drives are more expensive, they're a disadvantage when you are trying to get high speed on a budget.
Fibre channel storage has been filtering down from the rarefied heights of big
Re:The question makes no sense (Score:2)
True, the IOMeter performance of the drive reviewed - and of most SATA drives under deep queues - isn't as good as that of the more expensive SCSI/FC drives out there, but looking at this fact in isolation gives a skewed picture.
Most servers operate with a queue depth of 1 most of the time and that's especially true in a small office. If
But, how do you crate your own RAID controller... (Score:1)
I want to figure out how to do this with a Linux box. How cou
I run a SAN... (Score:5, Informative)
For a single server, an FC system seems like overkill to me. Buy a direct-attached SCSI enclosure and be done with it.
For 10 or more servers sharing disk space, a SAN (FC IMHO, although iSCSI is acceptable if your servers all share the same security requirements - i.e. are all on the same port of your firewall) is the way to go.
Here's what I see as the benefits of an FC SAN (if you don't need these benefits, you'll waste your money on a SAN if you buy one):
1) High availability
2) Good fault monitoring capability (me and my vendor both get paged if anything goes down, even as simple as a disk reporting errors)
3) Good reporting capability. I can tell you how many transactions a second I process on which spindles, find sources of contention, know my peak disk activity times, etc.
4) Typically good support by the vendor (when one of the two redundant storage processors simply *rebooted* unexpectedly, rather than my vendor saying, "Ah, that's a fluke, we're not going to do anything about it unless it comes back again", they had a new storage processor to me within one hour)
5) Can be connected to a large number of servers
6) Good ones have good security systems (so I can allow servers 1 & 2 to access virtual disk 1, server 3 to access virtual disk 2, with no server seeing other servers' disks)
7) Ease of adding disks. I can easily add 10 times the current capacity to the array with no downtime.
8) LAN-free backups. You can block-copy data between the SAN and tape unit without ever touching the network.
9) Multi-site support. You can run fiber channel a very long way, between buildings, sites, etc.
10) Ability to snapshot and copy data. I can copy data from one storage system to another connected over the same FC fabric with a few mouse clicks. I can instantly take a snapshot of the data (for instance, prior to installing a Windows service pack or when forensic analysis is required) without the hosts even knowing I did it.
Note that "large amounts of space" and "speed" aren't in the 10 things I thought of above. Really, that's secondary for most of my apps, even large databases, as in real use I'm not running into speed issues (nor would I on direct attached disks, I suspect). It's about a whole lot more than speed and space.
I have built my own Fibre Channel array. (Score:4, Interesting)
I located the array in the basement, and the computer was in my office. I had wonderful performance and no disk noise, which was quite nice...
If you want photos, take a look here [nuxx.net].
Also, while I sold off the rest of the kit, I've got the HSSDC DB9 cables left over. While they tend to go for quite a bit new (they are custom AMP cables) I'd be apt to sell them for cheap if another Slashdotter wants to do the same thing.
iSCSI/ATA over Ethernet - how/more info? (Score:2)
Re:iSCSI/ATA over Ethernet - how/more info? (Score:2)
Re:iSCSI/ATA over Ethernet - how/more info? (Score:2)
NetApp FC performs and scales somewhat better than iSCSI
o FC had 11% more OLTP throughput
o FC had 25% better OLTP response times
So FC on 2Gbps links delivered only 11% more OLTP throughput and 25% better response times. While that might be important to an enterprise, it also points out that iSCSI over 1Gbps Ethernet is really damn fast =)
Waste of time/money. (Score:2)