Data Storage Hardware Hacking

Fibre Channel Storage?

Dave Robertson asks: "Fibre channel storage has been filtering down from the rarefied heights of big business and is now beginning to be a sensible option for smaller enterprises and institutions. An illuminating example of this is Apple's Xserve RAID, which has set a new low price point for this type of storage - with some compromises, naturally. Fibre channel switches and host bus adapters have also fallen in price, but generally, storage arrays such as those from Infortrend or EMC are still aimed at the medium to high-end enterprise market and are priced accordingly. These units are expensive in part because they aim to have very high availability and are therefore well-engineered and provide dual redundant everything." This brings us to the question: Is it possible to build your own Fibre Channel storage array?
"In some alternative markets - education for example - I see a need for server storage systems with very high transaction rates (I/Os per second) and the flexibility of FC, but without the need for very high availability and without the ability to pay enterprise prices. The Xserve Raid comes close to meeting the need but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives.

I'm considering building my own experimental fibre channel storage unit. Disks are available from Seagate, and SCA to FC T-card adapters are also available. A hardware raid controller would also be nice.

Before launching into the project, I'd like to cast the net out and solicit the experiences and advice of anyone who has tried this. It should be relatively easy to create a single-drive unit similar to the Apcon TestDrive or a JBOD, but a RAID array may be more difficult. The design goals are to achieve a high I/O rate (we'll use postmark to measure this) in a fibre channel environment at the lowest possible price. We're prepared to compromise on availability and 'enterprise management features'. We'd like to use off the shelf components as far as possible.

Seagate has a good fibre channel primer, if you need to refresh your memory."
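For reference, PostMark is driven by a small script of commands; a run might look like the sketch below, where the target directory and the file/transaction counts are placeholder values rather than anything specified above.

    # rough PostMark invocation; adjust the location and counts for the array under test
    {
      echo "set location /mnt/fcarray"
      echo "set number 20000"
      echo "set transactions 50000"
      echo "run"
      echo "quit"
    } > pm.cfg
    postmark pm.cfg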
  • by heliocentric ( 74613 ) * on Sunday January 29, 2006 @07:11PM (#14595190) Homepage Journal
    Sun's A5200s are cheap on eBay, and you can pick up something like a 420R or a 250 to drive the thing. Put a qfe card in with the Sun Trunking software (now free for Solaris 10) and it'll serve up your files super speedy, all for a very reasonable price.

    My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

    I think new(er) fibre gear is getting cheaper; what was often high-end, data-center-only, big-$$ equipment a few years ago is hitting "at home" price points now.
    • by Anonymous Coward
      My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

      That's going a long way to protect one's pr0n stash.
    • by Anonymous Coward
      He stores his "home collection" on an A5200 [sun.com]?!

      Now that's what I call hardcore porn.
    • Plus you get a free Veritas license, at least on Solaris (sparc). Don't know if it works on x86.
      • Bah, I can solve that issue with an Amanda daemon and won't have to worry about "free" Veritas licenses. Especially since they (Veritas) were bought out by Symantec last year, and we've all seen what Symantec did to L0pht Heavy Industries. (Remember, they became @Stake and were bought by Symantec, and now they SELL LophtCrack under the moniker LC5 (LophtCrack version 5), but as I understand it, the earlier versions did more back when they were free.) I haven't had the guts to shell out 600 bucks or whatnot its got
      • Or, you get OpenSolaris and use ZFS on that same array. It's a filesystem and a volume manager in the same piece of software. Best of all, it's all free and I think the ZFS is open sourced too.
        • by SWroclawski ( 95770 ) <serge@wrocLIONlawski.org minus cat> on Sunday January 29, 2006 @11:44PM (#14596157) Homepage
          ZFS != Veritas Volume Manager

          The Veritas cluster file system (which is the reason I'd imagine someone would go through all the effort) has the ability for multiple systems to access a single volume at the same time, the moral equivalent of NFS, but without the NFS server or the speed problems associated with NFS due to the filesystem abstraction (i.e. it's good for databases).

          The only Free competitor that I know of for this is GFS.

          ZFS is a very powerful filesystem/volume manager, but it's more akin to LVM + very smart filesystem access.
        • Hrmm, I can use VxVM which has been out for 10 years and is rock solid OR I can use ZFS which Sun just released in the latest Solaris Express; it's not even shipping in a supported product!
          • All of that is true, *but* if I wanted proper enterprise-class file serving, I wouldn't be using software RAID. I'd get an EMC, Sun, HDS, IBM, HP or NetApp array and do away with the volume manager part. Having said that, VxVM would work OK, or Solstice DiskSuite, or this fancy new ZFS stuff, which looks very promising but doesn't have the performance or features that a good quality array has.
    • by Monkeys!!! ( 831558 ) on Sunday January 29, 2006 @08:08PM (#14595435) Homepage
      My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

      I call bs on this. I demand you post the IP address of the said server so I can verify your claims.
    • Aren't the A5200 arrays JBODs? Or do they do hardware RAID like the A1000 does...

      I need something that will do RAID 5 in hardware, and show up to the OS as a single device, just like the A1000 does... I considered an A5200 but I was told I'd need to use software RAID on it.
      • Have you investigated doing software RAID using Solaris 10's ZFS file system? I thought hardware RAID was the best, but the Solaris engineering team's blog postings indicate that they believe their ZFS RAID is more reliable than hardware RAID.
        • More reliable perhaps, but the cpu usage is likely to be a lot higher...
          Also software raid makes it more difficult to put my root filesystem (and kernel, bootloader etc) on there.
          Finally, software raid isn't likely to hotswap as nicely as hardware raid does.
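          For what it's worth, kicking the tires on ZFS RAID is only a few commands; a minimal sketch, assuming Solaris Express/OpenSolaris bits with ZFS and four disks seen as c1t0d0 through c1t3d0 (the device names are just examples):

            zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0   # single-parity stripe, RAID-5-ish
            zfs create tank/export                                # filesystems are cheap; no format/newfs step
            zpool status tank                                     # pool health, and resilver progress after a swap

          That said, the parent's point about the root filesystem stands: at the moment root still has to live somewhere else (e.g. on an SVM mirror or a plain slice).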
    • Sun's A5200s are cheap on eBay, and you can pick up something like a 420r or a 250 to drive the thing.

      Even better is to check AnySystem.com [anysystem.com] for your needs. Their everyday prices are excellent, and their eBay prices are even better! (Often you can get an 8-way system loaded with fibre channel drives and gigs of RAM for $2000-$3000.) I don't have any affiliation with them other than trying to get my boss to replace our expensive Windows servers with AnySystem servers. :-)

      Has anyone else used these guys?
    • by LordMyren ( 15499 ) on Monday January 30, 2006 @12:15AM (#14596231) Homepage
      You're right and you're wrong. I myself started with T-cards and 36-gig Cheetahs. It was amazing after a life of cheap, low-performance IDE (I was a college student at the time). But shit kept breaking, the hacks kept getting worse and worse, the duct tape bill started getting too big, and I just got tired of it. Drives would go offline and there was no hot-swap support... kiss your uptime goodbye.

      So I did exactly that: went on eBay and bought a pair of Photons. Only 5100s, but 28 drives was pretty nice.

      I was pretty underwhelmed. They were a steal when I got them (well, a "good" price when you factor in shipping), but the performance was never there, even with really good 10K.6 Cheetahs. RAID never helped, no matter how it was configured. It just didn't seem that useful.

      Plus, the A5200s weigh 125 lbs and hauling them between dorm rooms proved less than fun.

      And even locked in my basement closet I could hear the roar of the two A5100s. I'd been "meaning" to get rid of them for a while, but now that I'm changing states... it was finally time. I sold 'em on craigslist for $280 for both. Same as I bought 'em for, and that includes shipping.

      I dunno. If I were anyone with a brain, I'd wait another year for SAS to go ape-shit on everyone. The enclosure/host-controller split is a smart breakdown that'll really help beat away the single-vendor solution... the reason everyone can charge so much for hardware now is that everything is one unit: the enclosures, the controller, it's a big package with a nice margin. When XYZ company can come along and sell you a 24-drive enclosure for pennies that you can plug into a retail SAS controller... it's a game changer. Just watch the ridiculous margins drop.

      If you need something now, just get SATA RAID. Intel's new I/O processor is amazing; it'll give you really nice performance. But otherwise, I'd say wait for SAS. I suppose it's still more expensive than a pair of A5100s, but I'd wager the performance will be better.

      As a side note, I sometimes wonder whether the fibre cabling I bought was bad. I really couldn't sustain more than 40 MB/s even doing XFS linear copies, even with 14 drives dedicated to the task. I'm not sure if bad cabling would've given me some kind of overt error, or might have just quietly degraded my performance.

      Myren
      • by Anonymous Coward
        I really couldnt sustain more than 40 MB/s even doing XFS linear copies

        40MB/s seems awfully close to a saturated SCSI bus of that era. E.g., Sun Ultra workstations have 40MB/s SCSI busses in them. Probably the way to boost performance is to have multiple arrays, each using a dedicated controller in the workstation/server (ensure controllers are on separate PCI busses, too) and the mirror set up across the controllers (gee, three controllers, three arrays, say five disks per array...I'll let you cover the
      • I don't have a ton of fibre experience, but I think the only 'bad' fibre is broken (dark) fibre, and you wouldn't get ANYthing through that. I would suspect a software issue.
    • I'm not sure $2000-$3000 a pop is 'at home' price point, just yet, for a RAID array of HDDs.
    • I acquired two A5000s and two Sun e4500s for $500. Total raw storage is 1TB, but I unfortunately can't run them all at the same time...I trip the breaker in my apartment. Yeah, they might be cheap, but power-wise, they aren't.
  • It actually looks like SATA has a higher potential speed than FC, though it's not designed to act as a bus like FC is. I suspect a machine with a multiple SATA RAID controller will beat the equivalent FC solution, though perhaps with less failover capability.
    • Re:speedwise (Score:3, Informative)

      Really? You think this is true, when I can continue to just throw more host bus adapters into the machine, and more drives on the array? FC is meant to allow for up to 64k drives, you know. All striped if that's how I want it. And with many, many gigabit adapters attached to that same fabric.
    • My memory recalls (Score:5, Informative)

      by DaedalusHKX ( 660194 ) on Sunday January 29, 2006 @07:37PM (#14595301) Journal
      The major thing about both SCSI and FC was that both of those designs imply greater redundancy. Serial ATA provides one major thing which Parallel ATA could almost do: high THROUGHPUT... but no redundancy, and higher CPU usage than SCSI chipsets.

      Transactions make significant use of CPU resources in ATA-based systems. The only cards I am aware of that move ATA transactions to hardware come from Advansys and Adaptec; both use SCSI chipsets with ATA translators. Available for about $190.00 each, but hard to find.

      On a PII 450, the CPU usage with a softraid was almost 40% when at full throttle with 3 P/ATA drives. The same system dropped a bit using a PCI card (not sure why, actually; presumably some operations are offloaded by the PCI card, thus reducing the overhead, but don't quote me on it).

      The main problem with all of this is cost vs redundancy vs speed... which brings us to the old issue we found with cars: the FAST, GOOD, CHEAP triangle.

                          FAST
                         /    \
                    GOOD ------ CHEAP

      CHEAP + FAST != GOOD
      GOOD + CHEAP != FAST
      FAST + GOOD != CHEAP

      Since my own buying experience with FC is limited (read as NONEXISTENT), I can only tell you what I've learned on SCSI vs PATA/SATA rigs.

      SATA systems that do not use an Adaptec chipset are usually mere translators that make use of CPU resources to monitor drive activities. 3Ware cards seem to transcend this limitation rather well, and they provide a fine hardware RAID setup for ATA drives.
      PATA systems that do not use an Adaptec chipset are likewise mere translators.
      SCSI interfaces use a hardware chipset that monitors and controls each drive, thus relieving the CPU of being abused by drive-intensive operations (think of a high-profile FTP or CVS/Subversion server and you get the idea of what would happen to the CPU if it were forced to act as both server processor AND software RAID monitor).

      Onto bandwidth. Serial ATA setups suffer from an issue I've found with almost all setups: without separate controllers, SATA and PATA setups split the total bandwidth across the maximum number of active drives. More recent SATA controllers may have fixed this, but in general I've found that my systems slowed down in data transfer rate when using drives or arrays on the same card. SCSI seems to bypass this completely by providing each drive with a dedicated pipeline, though I am not sure what the set amount is; there is an issue, however, as some of the older chipsets DO have trouble handling a full set of 15 drives.

      The primer at the Seagate link in the article is on the money in their LVD vs FC FAQ. You will most likely require a backplane and a full setup, and it probably won't be cheap. The big difference is that FC setups are usually bulletproof and can support MASSIVE numbers of drives (128 or 125, I think...), with the only thing faster being a VERY high-end solid state drive or array. Those also have a very small amount of latency, but as far as speed goes, set up a good RAM drive on a dedicated memory bank with backup battery and EMP shielding... okay, that's way expensive :)

      ~D

      PS - I realize I've not really answered your question, but hopefully I've simplified things for everyone else out there.
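      One quick way to see that CPU cost on your own box (a rough sketch; the mount point and transfer size are arbitrary, and it assumes sysstat's iostat and a coreutils dd that understands oflag):

        iostat -x 5 &                              # watch %sys / %iowait alongside per-disk utilisation
        dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 oflag=direct
        kill %1                                    # stop the background iostat when the copy finishes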
    • Re:speedwise (Score:3, Informative)

      by sconeu ( 64226 )
      Say what?

      When I worked at an FC startup, FC was 4Gbps full duplex, and 10Gbps was in the works.
  • eSATA (Score:1, Interesting)

    by Anonymous Coward
    Would the new eSATA external SATA interface be fast enough for your purposes?
    • Yeah, I was thinking his problem sounded like a good choice might be an ATA over Ethernet-type solution, like Coraid. [coraid.com]
      • Whenever I look at those CORAID boxes, compared to something like iSCSI or FC, I seriously wonder why they built them. (even heard from someone who uses one that he hasn't gotten very good performance)

        Honestly, it seems like that company's products are only popular because they got a post on Slashdot. Not sure why I'd ever want one of their boxes, as the whole concept just feels "wrong". Couldn't they just have put that energy into something like low-cost iSCSI or FC boxes? (using SATA drives, of course)
    • Re:eSATA (Score:4, Interesting)

      by TinyManCan ( 580322 ) on Sunday January 29, 2006 @11:23PM (#14596098) Homepage
      eSATA is getting closer, but I believe the real long term answer is going to be iSCSI.

      I used to be really against iSCSI, as the native stacks on various OSes just did not deal with it well. By that I mean that a 50 MB/s file transfer would consume almost 100% of a 3GHz CPU. Also, the hard limit on gig-e transfers of 85 MB/s (TCP/IP overhead + iSCSI overhead) was just too low.

      Now, that has all changed. You can get TCP/IP Offload Engines for just about every OS (I don't work with Windows, so I don't know the status there), and 10 gigabit Ethernet has become financially reasonable.

      For instance, the T210-cx [chelsio.com] is around $800, and will deliver a sustained 600 MB/s (not peak or any other crap). Also, the latency on a 1500 MTU 10-gbs ethernet fabric is something to behold.

      I think by the end of this year we will see iSCSI devices on 10GbE that out-perform traditional SAN equipment in the 2Gb/s environment, in every respect (including price), by a large margin. 4Gb/s SAN could come close, but I still think hardware-accelerated iSCSI has a _ton_ of potential.

      If I were starting a storage company today, I would be focusing exclusively on the 10gbs iSCSI market. It is going to explode this year.
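      The initiator side is already short work on Linux with open-iscsi; a sketch, where the portal address and IQN are made-up examples:

        iscsiadm -m discovery -t sendtargets -p 192.168.10.5    # ask the portal what it exports
        iscsiadm -m node -T iqn.2006-01.com.example:array0 -p 192.168.10.5 --login
        # the LUN then shows up as an ordinary SCSI block device (e.g. /dev/sdb) to partition and mkfs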

  • Try areca (Score:3, Interesting)

    by Anonymous Coward on Sunday January 29, 2006 @07:23PM (#14595236)
    You can get a SATA-II to FC adapter from Areca. These are pretty expensive, but the nice thing is that you don't need a motherboard in your case. Combine it with a Chenbro 3U 16-bay case and you have a relatively affordable setup.

    http://www.areca.com.tw/products/html/fibre-sata.htm [areca.com.tw]
    • Re:Try areca (Score:2, Interesting)

      by Loualbano2 ( 98133 )
      Where can you go to order these? I did a froogle search and got nothing and the "where to buy" section doesn't seem to have any websites to look at prices.

      -Fran
      • Last week I bought an Areca 1120 and battery module from the Tekram Online Store [tekramonline.com]. It seems to work well, though my system bogs down on heavy writes. I'm using RHEL4u2 with a pre-built driver; I haven't tried the newer driver in the latest vanilla -mm kernel.

        I have 6 Maxtor MaxLine Pro 500GB SATA-II drives connected to it and configured it in a hardware raid6 config. Writes are fairly slow (~32MB/s) while reads are not too shabby (200MB/s). The card does have some really nice features, cli and http tools for
  • alternative to FC (Score:4, Informative)

    by Yonder Way ( 603108 ) on Sunday January 29, 2006 @07:36PM (#14595293)
    Aside from Fiber Channel, you could roll your own with iSCSI or ATAoE (ATA over Ethernet). This way you could take advantage of existing ethernet infrastructure and expertise, and partition off a storage VLAN for all of your DASD.
    • If you don't have experience trouble-shooting fibre channel, and do have experience trouble-shooting ethernet, this is definitely the way to go.

      Fibre channel is its own world, and there's no real carry-over from IP networking skills. No software analyzer. No 'ping' command. Of course, if it all works when you plug it together, then no worries!
      • I found it very easy to transition from "Standard Network Guy" to "Standard Network Guy + SAN". It is quite painful not to have a sniffer (unless you have a Cisco MDS9000, in which case Ethereal works just fine), but you can make do by turning on verbose logging for the HBAs and keeping track of the FCIDs of your devices.

        Also, the system tends to tell you sooner rather than later about issues on your fabric or loop.
  • by Liquid-Gecka ( 319494 ) on Sunday January 29, 2006 @07:38PM (#14595305)

    Speaking as the owner of two XServe RAID devices (5TB and 7TB models) as well as several other Fibre Channel devices, I can say that the Apple Fibre Channel is by no means slow. Each SATA drive has pretty much equal performance to the SCSI drives we use in our Dell head node. Combined, there are times when we can pull several hundred megs a second off the XRAIDs. Plus, our XRAID has been fairly immune to failures thus far. I have yanked drives out of it and it just keeps right on going.

    Another little hint: if you are really worried about speed, you can just install large, high-RPM SATA drives yourself. It's not that hard to do at all.

    Check out Alien Raid [alienraid.org] for more information.

    • Actually I don't know why he'd need 5TB or 7TB, but with SATA drives this would be relatively easy to achieve since most of these are relatively huge. SCSI drives by comparison get extremely expensive for even 36GB drives ($200.00 or so apiece, plus host controller, etc.). Prices may have changed since I worked in IT, but I recall even the refurbs being very pricey. SATA drives at $200.00 can give you a pretty nice 400GB or so. As you've said, this can easily solve your "cost" issues.

      SCSI Drives
      1000G (r
      • Why buy such small drives (36GB) when you need so many (28)?
        • Because they get FAR more expensive as you go up. Okay, so here are the next setups up.

          10k RPM (not 15,000, mind you) Seagate and Maxtor. The cheapest entries on Pricewatch are both $495 plus $5 or so shipping UPS ground.

          SCSI U320 (nice) and of course they're 300 gigs. GREAT!

          Okay, now here's the kicker. ALL the other suppliers provide them at $640.00 USD or more. You save some, but the downside, I wager, is that they won't carry enough to build a large RAID 5 setup. Also, the smaller drives are faster in a large array bec
      • First of all, a good 400GB SATA drive [newegg.com] runs around $300. Note that it isn't even SATA2 which one would probably want for a high performance RAID (NCQ et al).

        Second, 36GB is hardly the top end for SCSI drives. For a little more than $300 you can get 147GB SCSI drives [newegg.com]. Also keep in mind that more drives means more striped performance. So more drives isn't necessarily a bad thing.

        Granted, it is still more expensive for the SCSI setup. I just think you should make a fair comparison.

        -matthew
        • On the flip side, more drives = more power usage (and probably more heat?).

          Compromises aplenty when building RAID arrays. Performance, heat, power usage, noise, cost... etc.

    • Just to point out...the XServe RAID uses Ultra ATA drives, *NOT* SATA drives. I spent the last month or two researching RAID arrays, and that was one of the most disappointing things I saw...

      Xserve RAID features a breakthrough Apple-designed architecture that combines affordable, high-capacity Ultra ATA drive technology with an industry standard 2Gb Fibre Channel interface for reliable... from Apple [apple.com].

      • I would so love to prove you wrong right now. Turns out you are completely correct: both of our XServe RAID devices use 7200RPM Ultra IDE Hitachi drives. This invalidates at least some of our benchmarking, as it was done single-drive on our XServe systems (which were supposed to have like drives, but actually have Serial ATA drives). All of our single-drive benchmarks are invalid then (or rather, are meaningless to this discussion =) However, the XServe RAID still performed very well when doing a 5 disk vs 5 disk RAID set

        • 48Gigabits per second sounds pretty decent to me, unless you meant 6Gigabits per second?
        • The Cisco Catalyst 4000/4500-series switches (and its stackable equivalent, whose model number eludes me right now) are not meant for high performance, but rather port density and port aggregation (wiring closets, desktops, VoIP, wireless). For high-performance switching, the 3750-series, the 4948 and the 6500-series switches would be much better options for bandwidth aggregation, high performance, and clustering.

          Where I work, we use the 4000-series switches to get 192 10/100 ports available where performa
      • Not sure why the interface matters that much in the XServe RAID case. Each drive is on its own channel, which it can't saturate from the platters. Sure, the newer SATA drives are in general faster than the U-ATA drives in the XServe RAID, but that _seems_to_me_ to be independent of the interface.
        • It doesn't really. I was just being anal. I wanted the SATA drives simply because I think they'll outlast the IDE drives (in how long manufacturers keep making them), and I plan on keeping my RAID for quite a while. But yeah, it'll saturate the connection long before it saturates all the individual drives.
  • Infortrend is crap! Stay far, far away from anything produced by them. I'd also warn against purchasing from one of their US distributors - Zzyzx - but that's not an issue since the company just went out of business.
    • I haven't had a lot of experience with their fibre channel stuff (one system only), but my experience with infortrend SCSI arrays has been very good.

      When I worked at UUNET (1997-2000) we had hundreds of servers with infortrend-based arrays as their storage.

      I've had reasonable service, great pricing, and bad support from CAEN Engineering, one of their resellers. I have heard good things about Western Scientific.
    • I'm sorry, but that's simply not true. Have you any experiences to back up your claims?

      My company is a time-honoured user of Infortrend arrays - supported in the UK by European Raid Arrays (e-raid.co.uk).

      A good number of the Soho Film industry in London runs on Infortrend RAIDs supplied by ERA.
      We've never lost any data on the RAID as long as I've been here (5yrs+), had minimal drive failures and - I think - two PSU failures.
      We have many TB of RAID and numerous RAIDs (FC, SCSI and SATA).

      But then, we do keep
  • I think this site may have been on slashdot a few years back.

    http://web.archive.org/web/20010406133220/www.tcnj.edu/~feuss2/fibre/fibre.html [archive.org]
  • You say you want inexpensive. Yet you want fibre channel. I think you're looking in the wrong direction. FC is cute to experiment with, but not really feasible for your purposes.

    In your question, you said you don't want a whole lot of redundancy or high availability. You can do nearly the same with an inexpensive computer with a large raid, gigabit ethernet, and NFS or Samba.

    If there's money riding on this (i.e. you will lose big money each second the connection is down), then you need FC and service contra
  • We personally use a StorCase InfoStation at work (http://www.storcase.com/infostation/ifs_ovrvw.asp [storcase.com]). Now, we have the SCSI version, so I can't speak to the Fibre version, but a fully loaded StorCase is cheaper than an XServe, and more dense. Alas, it would not come with the instant-solution tech support that an XServe would.
  • by sirwired ( 27582 ) on Sunday January 29, 2006 @09:05PM (#14595670)
    First, you haven't articulated your needs properly. "High I/O rates" means two separate things, both of which must be considered and engineered for:

    1) High numbers of transactions per second. Your focus here is going to be on units that can hold a LOT of drives (not necessarily of high capacity). You want as many sets of drive heads as possible going here. In addition, SATA drives are not made to handle high duty-cycles of high transaction rates. The voice coils have insufficient heat dispersion. (They are just plain cheaper drives.) High transaction rates require a pretty expensive controller, and you won't be able to avoid redundancy, but that isn't a problem, since you are going to need both controllers to support the IOPS. (I/O's per second.)

    2) High raw bandwidth. If you need raw bandwidth and your data load is non-cacheable, then really software RAID + a JBOD may be able to get the job done here, if you have a multi-CPU box so one CPU can do the RAID-ing. Again, two controllers are usually going to provide you with the best bandwidth. SATA striped across a sufficient number of drives can give you fairly decent I/O, but not as good as FC.

    There are "low end" arrays available that will offer reduced redundancy. The IBM DS 400 is an example. This box is OEM'd from Adaptec, and pretty much uses a ServRAID adapter as the "guts" of the box. This unit uses FC on the host side, and SCSI drives on the disk side. It is available in single and dual controller models. (Obligatory note: I do work for IBM, but I am not a sales drone.) This setup has the distinct advantage of being fully supportable by your vendor. A homegrown box will not have that important advantage.

    Don't be scared away by the list price, as nobody ever pays it. Contact your local IBM branch office (or IBM business partner/reseller), and they can hook you up.

    This unit is also available as part of an "IBM SAN Starter Kit" which gives you a couple of bargain-barrel FC switches, four HBA's, and one of these arrays w/ .5TB of disk. (I am writing a book on configuring the thing in March, so you will have a full configuration guide (with pretty pictures and step-by-step instructions) by the beginning of April.)

    SirWired
  • by postbigbang ( 761081 ) on Sunday January 29, 2006 @09:34PM (#14595739)
    The answer has a lot to do with the previously mentioned I/O goals that you have. Let me try to answer this in a taxonomy. This rambles for a bit but bear with me.

    Case One

    Let's say this array is to be used for a single application that needs lots of pull and is populated initially from other sources, with a low delta of updates. In other words, largely reads vs writes. Caching may help; and if so, then you can tune the app (and the OS) to get fairly good performance from SATA RAID or through FC JBODs in a RAID 0 or 5 configuration. (There is no real RAID 0; it's just a striped array without redundancy/availability and is therefore a misnomer.)

    Case Two

    Maybe you need a more generalized SAN, as it will be hit by a number of machines with a number of apps. You'll need better controller logic. You'll likely initially need a SAN that has a single SCSI LUN appearance, where you can log on to the SAN via IP for external control of the can that stores the drives (and controls the RAID level, and so on). This is how the early Xserve RAID worked, and how many small SAN subsystems work. Here, the I/O blocks/problems come at different places-- mostly at the LUN when the drive is being hit by multiple requests from different apps connected via (hopefully) an FC non-blocking switch (Think an old eBay-purchased Brocade Silkworm, etc). SCSI won't necessarily help you much.... and a SATA array has the same LUN point block. Contention is the problem here; delivery is a secondary issue unless you're looking for superlative performance with calculated streams.

    Case Three

    Maybe you're streaming or rendering and need concurrent paths in an isochronous arrangement with low latency but fairly low data rates-- just many of them concurrently. Studio editing, rendering farms, etc. Here's where a fat server connecting a resilient array works well. Consider a server that uses a fast, cached, PCI-X controller connected to a fat line of JBOD arrays. The server costs a few bucks, as does the controller, but the JBOD cans and drives are fairly inexpensive and can be high-duration/streaming devices. You need a server whose PCI-X array isn't somehow trampled by a slow, non-PCI-X GbE controller, as non-PCI-X devices will slow down the bus. You also get the flexibility of hanging additional items off the FC buses, then adding FC switches as the need arises. At some point the server's cache becomes less useful and the server becomes its own bottleneck-- but you'll have proven your point and will have what now amounts to a real SAN with real switches and real devices.

    The SATA vs SCSI argument is somewhat moot. Unless you cache the SATA drives, they're simply 2/3rd the possible speed (at best) of a high-RPM SCSI/FC drive. It's that simple. uSATA will come one day, then uSATA/hi-RPM..... and they'll catch up until the 30Krpm SCSI drives appear.... with higher density platters....and the cost will shrink some more.

    I've been doing this since a 5MB hard drive was a big deal. SCSI drives will continue to lead SATA for a while, but SATA will eventually catch up. In the mean time, watch the specs and don't be afraid of configuring your own JBOD. And if you want someone to yell at, the Xserve RAID is as good as the next one.... except that it has the Apple Sex Appeal that seems a bit much on a device that I want to hide in a rack in another building.
  • Try AoE instead (Score:3, Insightful)

    by color of static ( 16129 ) <smasters&ieee,org> on Sunday January 29, 2006 @09:41PM (#14595767) Homepage Journal
    Fiber channel just seems to have too high a cost of entry these days (or maybe it always has :-). It's not bad today with SATA being used on the storage arrays, but it is hard to compete with the other emerging standards. I've been using AoE for a little while now and have been impressed with the bang for the buck.
        A GigE switch is cheap, and a GigE port is easy to add, or you can use the existing one on a system. AoE sits down below the IP stack so there is little overhead for comm, and it looks like a SATA drive in most ways. The primary vendor's appliance (www.coraid.com) will take a rack full of SATA and make it look like one drive via various RAID configs.
        Yeah FC is faster, but how many drives are going to be talking at once? Are you really going to fill the GigE and need a FC to alleviate the bottleneck? If you are then FC is probably not the right solution for you anyway.
        Your mileage may vary, but I expect anyone will get comparable results for the price, and many will get excellent results overall.
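        For the curious, the client side with the aoetools userland is only a handful of commands; a sketch (the shelf/slot numbering and the choice of XFS are just examples):

          modprobe aoe                      # AoE protocol driver
          aoe-discover                      # probe the local Ethernet segment for targets
          aoe-stat                          # list the shelves/slots that answered
          mkfs.xfs /dev/etherd/e0.0         # shelf 0, slot 0
          mount /dev/etherd/e0.0 /mnt/aoe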
    • AoE is fantastic and should scale right past the bus speed of your head system - depending on how many interface (PCI-X, etc.) slots it has capable of full-gigabit speed, your biggest limitation will be the backplane speed of your motherboard or switch.

      Of course, latency is still probably going to be an issue running on 'commodity' hardware - remember that, even though LAN connectivity works well for many things, it's designed for the general case and doesn't work perfectly for everything. That's where Inf
  • by Animixer ( 134376 ) on Sunday January 29, 2006 @09:57PM (#14595810)
    I can toss in a bit of advice, as I've been working with fibre channel from the low to the high end for several years now. Currently I'm managing a lab with equipment from EMC, HDS, McData, Brocade, Qlogic, Emulex, 3par, HP, Sun, etc., from the top to the bottom of the food chain. I'm personally running a small FC-AL setup at home for my fileserver.

    0. Get everything off of ebay.
    1. Stick with 1Gb-speed equipment. It's older, but an order of magnitude less expensive.
    2. Avoid optical connections if you can -- for a small configuration, copper is just fine and often a lot less expensive. Fibre is good for long-distance hauls and >1Gb speeds.
    3. Pick up a server board with 64-bit PCI slots, preferably at 66MHz.
    4. Buy a couple of qlogic 2200/66's. These are solid cards, and are trivial to flash to sun's native leadville fcode if you desire to use a sparc system and the native fibre tools. They also work well on linux/x86 and linux/sparc64. These should run about $25 each.
    5. Don't buy a raid enclosure. Get a fibre jbod. You can always reconstruct a software raid set if your host explodes if you write down the information. If you blow a raid controller, you're screwed. Besides, you won't want to pay for a good hardware raid solution, and I have yet to see a card-based raid; they're always integrated into the array. I recommend a Clariion DAE/R. Make sure you get one with the sleds. These have db-9 copper connections and empty should run about $200. Buy 2 or 4 of these, depending on how many hbas you have. They'll often come with some old barracuda 9's. Trash those; they're pretty worthless.
    6. Fill the enclosures with seagate fc disks. If you're not after maximum size, the 9gb and 18gb cheetahs are cheap, usually like $10 a pop on ebay and are often offered in large lots. They are so inexpensive it's hard to pass them up. Try to get ST3xxxFC series, but do NOT buy ST3xxxFCV. The V indicates a larger cache, but also a different block format for some EMC controllers. They are a bitch to get normal disk firmware on.
    7. Run a link from each enclosure to each hba. Say you have 2 enclosures with 10 disks each. Simple enough; and 1gb second up and down on each link.
    8. Use linux software raid to make a bigass stripe across all the disks in one enclosure, repeat on the second enclosure, and make a raid10 out of the two. Tuning the stripe size will depend on the application; 32k is a good starting point.

    With that setup, you should pretty much max out the bandwidth of a single 1Gb link on each enclosure, and enjoy both performance and redundancy with the software RAID, and not have to worry about any RAID controllers crapping out on you.

    You should be able to get two enclosures, 20 disks, a couple of copper interconnects and some older hbas for about $750 to $1,000 depending on ebay and shipping costs.

    This should net you some pretty damn reasonable disk performance for random-access type IO. This is NOT the right approach if you're looking for large amounts of storage. You'll get the RAID 10 redundancy in my example, but if you want real redundancy (and I mean performance-wise, not just availability -- you can drive a fucking truck over a Symmetrix, then saw it in half, and you probably won't notice a blip in the PERFORMANCE of the array -- something Fidelity and whatnot tends to like) you have to pay big money for it. The huge arrays are more about not ever quivering in their performance no matter what fails.

    Hope this was of some use.
    • You just spent $750 to $1000 on a 90GB storage subsystem with only 1gb write bandwidth. Do you really think that's such a swell deal?
      • Plus, you'll spend $25 a month on electricity for it.
      • Couple of replies. Yes, I made a mistake when I said how to do the striped mirror; SVM does handle it behind the scenes, but on Linux you do have to make the mirrors first and then stripe. Thank you for pointing that out.

        Second, yes, I did pay $750 for 90GB of storage with 1Gbit write bandwidth. It is highly redundant, and has effectively 10 spindles in the stripe; with what the original poster was implying (lots of random IO) this would be quite high-performance with the data striped across 10 drives with t
    • I had a setup using one of those CLARiiON FC boxes a while back. Used 20 x 36GB drives, 2 controllers, did a mix of hardware and software RAID, and managed to get >100MB/s sustained read performance (from the raw device, anyway). Only it had one major problem: power.

      My setup consumed a continuous 800W or so, not to mention any increase in air conditioning usage that resulted. I've since moved to a 4-drive 3ware setup, which is slower on raw reads (marginally, but not by enough for me to care), has abo
    • 8. Use linux software raid to make a bigass stripe across all the disks in one enclosure, repeat on the second enclosure, and make a raid10 out of the two. Tuning the stripe size will depend on the application; 32k is a good starting point.

      That's RAID 0+1, not RAID 10. Under Solaris Disksuite, it looks like RAID 0+1 but actually behaves like RAID 10; be sure of what you're actually doing on your setup.

      For those not aware of the difference, RAID 0+1 is two stripes mirrored; if you lose a disk in one st
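      In Linux mdadm terms the difference is just the order of operations; a sketch with invented device names, assuming two ten-disk enclosures showing up as /dev/sdb..sdk and /dev/sdl..sdu:

        # RAID 0+1 as described above: stripe each enclosure, then mirror the two stripes
        mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=10 /dev/sd[b-k]
        mdadm --create /dev/md1 --level=0 --chunk=32 --raid-devices=10 /dev/sd[l-u]
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1

        # RAID 10: mirror one disk from each enclosure, then stripe the ten mirrors
        mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdb /dev/sdl
        # ...repeat for the other nine pairs (md11..md19), then:
        mdadm --create /dev/md20 --level=0 --chunk=32 --raid-devices=10 /dev/md1[0-9]

      Same disk count either way, but with RAID 10 a second failure only kills the array if it hits the other half of an already-degraded mirror pair.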

  • How about iSCSI? (Score:3, Insightful)

    by MikeDawg ( 721537 ) on Sunday January 29, 2006 @09:58PM (#14595821) Homepage Journal
    Depending on your company's needs, could an iSCSI [wikipedia.org] solution be more viable? There are some very good units out there with loads of different RAID setups. There are some trade-offs vs. Fibre Channel, such as speed vs. cost, etc. I've seen quite a bit of data being handled to/from iSCSI arrays quite nicely. However, the companies I worked for had no true need for the blindingly fast speed and extremely high cost of FC arrays.
    • ATA over Ethernet seems like an even better choice for small biz than obsolete fibre channel. In some cases it may be a better choice over new fibre channel.

      Here's a little write-up on it: http://linuxdevices.com/news/NS3189760067.html [linuxdevices.com]

      It mentions how you can use ATA over Ethernet in combination with iSCSI. The ATAoE protocol has much less overhead than iSCSI, because ATAoE does not use TCP/IP; rather, it is its own non-routable protocol intended for local storage over the Ethernet hardware. It is ex
  • by slazar ( 527381 )
    Take a look at what HP/Compaq has to offer. We have a few arrays from them, both SATA and SCSI, though not all are Fibre Channel.
  • My suggestion to the OP is that if he wants to achieve a high I/O rate at the lowest possible price, then the solution is of course to use a Google-like approach: use commodity hardware.

    For example, buy a lot of ATA/SATA hard disks (in order to spread the load over them), use them inside ATA-over-Ethernet enclosures (www.coraid.com, Linux driver available in any 2.6 vanilla kernel), and connect them with multiple Gigabit Ethernet links to the storage server. And the best part in all of this: ATA/SATA is s

  • but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives

    I'd recommend more SATA drives for the same price over fewer, more expensive FC drives. The differences in RAID controllers and the number of drives have much more impact on array performance than the interface technology. Since FC controllers and drives are more expensive, that's a disadvantage when you are trying to get high speed on a budget.

    Fibre channel storage has been filtering down from the rarefied heights of big
  • The coolest thing would be to turn a Linux box into a hardware RAID controller. Most of the arrays out there do not run specialized firmware for their OS; they run Linux, VxWorks, or Windows NT (*cough* EMC Clariion *cough*) -- heavily customized versions of these OSes, of course, with some having specialized ASICs inside (i.e. Engenio). The thing is, they turn their HBAs into targets, so that the initiators (your client PCs) can use their disks.

    I want to figure out how to do this with a Linux box. How cou
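    One way to do the target half on a stock Linux box is the iSCSI Enterprise Target (the iscsitarget project); a sketch, where the IQN and the backing md device are invented examples:

      # export a local array as LUN 0 over iSCSI; any initiator can then log in to this box
      {
        echo "Target iqn.2006-01.org.example:storage.md0"
        echo "    Lun 0 Path=/dev/md0,Type=fileio"
      } >> /etc/ietd.conf
      ietd    # or use the distribution's iscsi-target init script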
  • I run a SAN... (Score:5, Informative)

    by jmaslak ( 39422 ) on Monday January 30, 2006 @01:08AM (#14596415)
    I administer a decently sized storage subsystem connected to about 10 servers (half database servers, 1/4 large storage space but low speed requirement, 1/4 backup/tape/etc server).

    For a single server, a FC system seems like overkill to me. Buy a direct attached SCSI enclosure and be done with it.

    For 10 or more servers sharing disk space, a SAN (FC IMHO, although iSCSI is acceptable if your servers all share the same security requirements - i.e. are all on the same port of your firewall) is the way to go.

    Here's what I see the benefits of a FC SAN as (if you don't need these benefits, you'll waste your money on the SAN if you buy it):

    1) High availability

    2) Good fault monitoring capability (my vendor and I both get paged if anything goes down, even something as simple as a disk reporting errors)

    3) Good reporting capability. I can tell you how many transactions a second I process on which spindles, find sources of contention, know my peak disk activity times, etc.

    4) Typically good support by the vendor (when one of the two redundant storage processors simply *rebooted* unexpectedly, rather than my vendor saying, "Ah, that's a fluke, we're not going to do anything about it unless it comes back again", they had a new storage processor to me within one hour)

    5) Can be connected to a large number of servers

    6) Good ones have good security systems (so I can allow servers 1 & 2 to access virtual disk 1, server 3 to access virtual disk 2, with no server seeing other servers' disks)

    7) Ease of adding disks. I can easily add 10 times the current capacity to the array with no downtime.

    8) LAN-free backups. You can block-copy data between the SAN and tape unit without ever touching the network.

    9) Multi-site support. You can run fiber channel a very long way, between buildings, sites, etc.

    10) Ability to snapshot and copy data. I can copy data from one storage system to another connected over the same FC fabric with a few mouse clicks. I can instantly take a snapshot of the data (for instance, prior to installing a Windows service pack or when forensic analysis is required) without the hosts even knowing I did it.

    Note that "large amounts of space" and "speed" aren't in the 10 things I thought of above. Really, that's secondary for most of my apps, even large databases, as in real use I'm not running into speed issues (nor would I on direct attached disks, I suspect). It's about a whole lot more than speed and space.
  • by nuxx ( 10153 ) on Monday January 30, 2006 @01:46AM (#14596519) Homepage
    I have done this using a Venus-brand 4-drive enclosure, some surplus Seagate FC drives from eBay, a custom-made backplane, a Mylex eXtremeRAID 3000 controller, and a 30m HSSDC-to-DB9 cable from eBay.

    I located the array in the basement, and the computer was in my office. I had wonderful performance and no disk noise, which was quite nice...

    If you want photos, take a look here [nuxx.net].

    Also, while I sold off the rest of the kit, I've got the HSSDC DB9 cables left over. While they tend to go for quite a bit new (they are custom AMP cables) I'd be apt to sell them for cheap if another Slashdotter wants to do the same thing.
  • I've been considering the storage thing for a while now. My current configuration is a Broadcom RAIDcore 6x250GB RAID 5 in a dual Opteron system with PCI-X 64/133MHz slots. Given that it's a workstation Tyan board, it cost me a mint, but I have oodles of bandwidth to play with. I've got a few other arrays in that machine on other controllers. The board also has U320, and I was all set to buy some 15kRPM drives from eBay till I saw the benchmarks of WD's new Raptor 150, which seems to kill all but the top end
    • You're looking for iSCSI. Google for iSCSI target drivers for Linux and you can export files or drives to any other system. That takes care of the block device. Multiple-reader/multiple-writer file systems can be had if you look and feel like spending.
    • iSCSI would seem to be your answer. Here's a quote from NetApp's whitepaper on iSCSI:

      NetApp FC performs and scales somewhat better than iSCSI
      o FC had 11% more OLTP throughput
      o FC had 25% better OLTP response times


      So FC-AL on 2Gbps links has only 11% more bandwidth available and 25% faster response time. While that might be important to an enterprise it also points out that iSCSI over 1Gbps ethernet is really damn fast =)
  • If you are just having 'fun' with this - great. Otherwise, you can either get by with what can be done easily with SATA/SCSI and a cheap box, or you are into the low-end RAID segment with things like Apple's box. Need over 180TB going at over 1.5/3GB per second on a single system? Try something from here: http://www.datadirectnet.com/ [datadirectnet.com].
