UPS Setup For a Small/Mid-Size Company?

An anonymous reader writes "We're a small company employing ~30 people and we are becoming increasingly reliant on virtual servers. Unfortunately, the hosts they are on don't have redundant power supplies because we simply don't have the capacity. We currently have one UPS per rack, which gives us about two minutes. This may have been enough time when they were put in — they've been there for some time — but it isn't really enough time to shut everything down in the event of a failure. Domain Controllers alone may take up to 15 minutes. So I'm looking at upgrading the UPSs to ones that would preferably give us around 15 minutes of breathing space and send an email or text alert when a failure is detected. Something that could trigger shutdowns automatically would also be nice. Of course cost is a key factor too. So given all of the above, what does Slashdot recommend?"
  • by Anonymous Coward on Saturday February 13, 2010 @02:16PM (#31128532)
    should serve 2 purposes - give you temp power and keep your IT guys fit
    • by biryokumaru ( 822262 ) * <biryokumaru@gmail.com> on Saturday February 13, 2010 @02:21PM (#31128590)
      This [videosurf.com] is a much better solution. Plus, it can melt your face!
    • Re: (Score:3, Insightful)

      by Larryish ( 1215510 )
      New UPS batteries and redundant backup generators sound like the way to go. Even if your UPS only gives you 2 minutes, that should be enough time to fire up a generator.
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        I second this. Get small consumer sized UPS units that can keep a server up for a few minutes. Then get a backup generator with an automatic transfer switch to keep the whole building alive until power returns.

        When sizing your generator, don't forget to include ancillary equipment such as routers and emergency lighting if you decide not to wire up the generator for the entire building. Otherwise, sizing should be easy. Just watch your watt-hour meter during a normal business day and add 25% or so to that as
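
        As a rough illustration of that sizing rule of thumb, here is a quick Python sketch; every number in it is a made-up placeholder, not a measurement from this thread:

          # Rough generator sizing from watt-hour meter readings taken during a
          # normal business day. All numbers below are hypothetical placeholders.
          readings_kwh = [4.2, 4.8, 5.1, 4.6]    # hourly consumption samples, kWh
          peak_kw = max(readings_kwh)            # over one hour, kWh consumed = average kW;
                                                 # size against the worst hour, not the average
          margin = 0.25                          # the ~25% headroom suggested above
          ancillary_kw = 1.5                     # routers, switches, emergency lighting, etc.

          generator_kw = (peak_kw + ancillary_kw) * (1 + margin)
          print(f"Minimum generator size: {generator_kw:.1f} kW continuous")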

        • You can also convert an old diesel Mercedes or truck engine with a large alternator for the job. Stay away from unleaded gas generators - the fuel is unstable and they are unreliable unless used regularly. If you can get natural gas at your location, they are a great option as well since you don't have to worry about running out of fuel or refilling for an extended outage. No matter what you use, be sure to set it up to run weekly for at least 10 or 15 minutes to ensure it runs properly when it is actuall
        • Re: (Score:3, Informative)

          by Ihmhi ( 1206036 )

          DirecNIC (an ISP located in New Orleans) managed to keep their operations going for the entire duration of the Hurricane Katrina disaster and long after that with practically no downtime thanks to their backup generators. They didn't just shut things down - they kept right on going. If you're in an area where power loss or some other sort of disaster is a real threat then this is absolutely the way to go. Bonus: you can pretty much keep the UPSes you have because they just need to keep the systems online l

        • Re: (Score:3, Interesting)

          by Muad'Dave ( 255648 )

          Be careful sizing a backup generator when a large part of its load will be switching supplies and UPSes. UPSes in particular have _very_ strange load waveforms due to the rectifiers that are used in the charging circuit. The harmonics passed back on the line can cause the generator to 'seek' trying to lock in to 50/60 Hz, which can cause significant damage.

          A permanent magnet generator can help, but they're a little more expensive.

          http://ecmweb.com/news/electric_ensuring_generator_ups/ [ecmweb.com]

          http://www.physicsforum [physicsforums.com]

  • by plopez ( 54068 ) on Saturday February 13, 2010 @02:17PM (#31128542) Journal

    This is sort of off topic, but when was the last time you tested the UPS units that were installed "some time ago". The batteries can eventually go flat. You better check what you have ASAP. You may need to replace them sooner than you think.

    I can't remember the brand, but some of the higher end UPS units I have used came with monitoring software. The software polled the UPS unit and started the shutdown as soon as a power failure caused the switchover to battery.

    HTH.

    • Re: (Score:3, Insightful)

      by Lehk228 ( 705449 )
      Even a cheap $30 APC backup unit has a USB connector; Windows recognizes it automatically and can start a shutdown/hibernate immediately when the power goes out.
    • by TheLink ( 130905 ) on Saturday February 13, 2010 @02:44PM (#31128778) Journal
      The trouble with most UPS battery tests is they involve putting the stuff on battery...

      If you have a server with redundant power supplies- then you can have each power supply attached to a different UPS, then you can test each UPS one by one, hopefully without the server going down due to one UPS failing the battery test.

      You don't necessarily want to shut down immediately. I have my machine shut down once the software thinks there's only X seconds of battery life left. Set X to something high enough that there's enough time to shut down AND cold boot AND shut down again... Otherwise, in the event of a shutdown, you will have to wait some hours until the UPS is charged enough before it is safe to power up - in case the power company or whoever cuts power on you halfway through your boot-up :).
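
      A back-of-the-envelope way to pick X, with assumed timings; the point is just that X has to cover a full shutdown, a cold boot, and a second shutdown, plus some slack:

        # All timings are assumptions for illustration; measure your own gear.
        shutdown_sec  = 15 * 60   # clean shutdown (domain controllers are slow)
        cold_boot_sec = 10 * 60   # cold boot everything back up
        slack_sec     = 5 * 60    # margin for optimistic runtime estimates

        # Worst case: power dies again right as the boot finishes.
        threshold_x = shutdown_sec + cold_boot_sec + shutdown_sec + slack_sec
        print(f"Trigger shutdown below {threshold_x // 60} minutes of estimated runtime")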
      • The trouble with most UPS battery tests is they involve putting the stuff on battery... If you have a server with redundant power supplies- then you can have each power supply attached to a different UPS, then you can test each UPS one by one, hopefully without the server going down due to one UPS failing the battery test.

        The best way to test your power backup system is to throw the main switch and see what fails. How are you going to react in a power failure if you don't have that data? What good is your backup if it only works in carefully prepared, controlled and documented test scenarios?

        • Whatever you do, don't push the FM-2000 button. That's one expensive systems test.

          Our new datacenter back in 2002 had a scramble button that didn't have a cover, and it got bumped by janitors a couple times. Luckily this DC didn't have the FM-2000. We had sprinklers, and an AC condenser right above our EMC Symmetra. Wasn't nice coming in on a hot July Monday morning and finding the DC floor covered in water.
        • by plopez ( 54068 )

          Good tests. But first shutdown anything a power failure can hammer. I remember when a FNG manager told a PFY to "just unplug the UPS" at a place I was working. The UPS battery was flat. The server was running an expensive MS database product. Server crashed. Expensive DB product running on server did not recover. Guess who was in charge of doing some admin and DBA work? FNG manager and PFY were not the ones up at 1 am trying to recover and restart the DB engine w/o losing 20-30 million dollars of data.

          Yes w

        • Re: (Score:3, Insightful)

          by Idarubicin ( 579475 )

          The best way to test your power backup system is to throw the main switch and see what fails.

          I'd say that's the most rigorous, realistic way to test, but I'm not sure that it's the best way to test.

          So, you pulled the main breaker and took out all of the production servers because there was a problem with your UPS configuration software? Oops. Why didn't you try it on one server today, then one rack tomorrow, and then pull the plug on the whole system after close on business on Friday?

          Pulling the Big Red Switch is one good way to test, but it's not the only way, and it almost certainly should

          • I'd say that's the most rigorous, realistic way to test, but I'm not sure that it's the best way to test.

            So, you pulled the main breaker and took out all of the production servers because there was a problem with your UPS configuration software? Oops. Why didn't you try it on one server today, then one rack tomorrow, and then pull the plug on the whole system after close on business on Friday?

            What if the power goes out on Wednesday?

            • Re: (Score:3, Insightful)

              by Idarubicin ( 579475 )

              What if the power goes out on Wednesday?

              That's my point, really. Test in situations where a failure has less severe consequences and you can troubleshoot smaller pieces first. Do the full-on Wednesday-morning test after you can pass the other tests.

              While it's important to be able to recover from a power failure, it's also important that designing and testing your redundant power solution doesn't do more damage to the business than not testing.

            • The power going out on Wednesday is why you run your test on Friday--you test your protection to ensure that you can recover from a mid-week power failure by Thursday morning, but you don't run the actual test on Wednesday. You set your test up so that a catastrophic failure causes the least disruption.

              When I was in the Navy, we would run drills in the same manner--our response was the same whether the sub was running deep and fast or slow and shallow, so why run the drills in the riskier situation of deep

              • Re: (Score:3, Insightful)

                by Blkdeath ( 530393 )

                The power going out on Wednesday is why you run your test on Friday--you test your protection to ensure that you can recover from a mid-week power failure by Thursday morning, but you don't run the actual test on Wednesday. You set your test up so that a catastrophic failure causes the least disruption.

                When I was in the Navy, we would run drills in the same manner--our response was the same whether the sub was running deep and fast or slow and shallow, so why run the drills in the riskier situation of deep and fast, where a catastrophic failure in response to the drill could cause very bad problems very quickly?

                But in a data centre you don't face the risk of drowning and/or perishing. Also, it's really easy to convince yourself that your setup works because you've carefully and methodically powered down each backup source individually, but it's near impossible to determine the stress that would be placed on a network by a power failure unless you simulate one. Obviously you have to take steps beforehand to make sure your stuff is reliable and you're not going to run a full shutdown while everyone is accessing the

                • Re: (Score:3, Insightful)

                  by Endo13 ( 1000782 )

                  The fact of the matter is, if you're not confident enough in your data centre's reliability to throw the switch, your data centre just isn't reliable enough.

                  Well, yeah. That's really the whole point in this discussion - they don't know it's reliable enough. Therefore, throwing the switch on the whole center at this point is foolish at best. You don't do that until you're reasonably confident based on other smaller tests you've done first. Which I believe was exactly what was being stated. Sometimes you have to learn to read between the lines, and understand things without having every little detail explicitly spelled out for you.

    • by rs79 ( 71822 ) <hostmaster@open-rsc.org> on Saturday February 13, 2010 @02:46PM (#31128798) Homepage

      I had a $3000 UPS to keep a big Sun alive. After a couple of years they're $99 on ebay with dead batteries. But they have some of the cleanest pure sine wave power you ever saw. Best. Inverters. Ever. Capish?

      The batteries last 5 years, then you replace them, period. They're "Sealed lead acid" or "SLA", in plastic cases with two tabs. They come in various sizes. Get the same ones. Be careful where you get them: $8 batteries off eBay tend to be $40 in the wrong store; for example, one size is the same as used in chair lifts, and the medical-device stores that have them in stock want $40 each. Of course shipping LEAD acid batteries ain't cheap.

      The batteries for the UPC2200, just as an example, are $150 new for the pair plus shipping. $99 for the chassis (plus shipping and they're ungodly heavy without the batteries) and you have a $3000 UPS.

      That'll keep a small server running for a while if you give them one each. But you'd have to be a bit of a dick to have a dozen of these running a dozen servers, what you want is a one ton 12V battery, the kind your phone CO might use, a huge ass inverter and some panic circuit to cut power over to battery when the line goes down. That's the proper way to do it. Once a year they come out and recharge your battery for a small fee. These batteries cost a grand or two but last a long time. Refurbs are fine.

      The other nice thing about big batteries is if you get wind or solar stuff added on to the building, you can just wire that power in to the battery with no charge controller. Cause, uh, there's no fear your solar panels are gonna overcharge a ONE TON battery.

  • Generator (Score:4, Informative)

    by binarylarry ( 1338699 ) on Saturday February 13, 2010 @02:18PM (#31128556)

    Get a generator that can power things from natural gas (or other available resource).

    So when the power goes out, it will be seconds before the generator kicks on, and the UPSes are just there to keep power available until the generator is ready.

    • For a generator, only 2 minutes of battery time may be cutting it a little too close.

      • Re:Generator (Score:5, Interesting)

        by afidel ( 530433 ) on Saturday February 13, 2010 @02:54PM (#31128852)
        Our diesel standby generators kick on and assume load in 20-30 seconds. Of course we have a maintenance contract and weekly generator tests to make sure that stays true, a neglected generator probably won't kick on at all.
        • That doesn't sound like something a 30-person company could pull off. Or are you just incredibly competent?
          • by afidel ( 530433 )
            Here's [electricge...direct.com] a Kohler 30 kW generator with a claimed load transfer time of 10 seconds, $10k list, so figure $20k installed with ATS. It's natural gas, which is actually much easier at 30 kW (my 100, 250, and 500 kW sets are too big for the gas lines in our commercial park to handle, so we use diesel).
          • by pla ( 258480 )
            That doesn't sound like something a 30-person company could pull off. Or are you just incredibly competent?

            Failover generators don't really count as rare or overly expensive items. $5k-$10k will get you a decent 20-25 kW unit, under $1500 to have it properly installed (and I find most tech-oriented companies tend to have at least one licensed electrician on staff, which would make installation "free" from the PHB perspective), and make it someone's job to do a weekly test fire (takes all of 15 minutes).
          • That doesn't sound like something a 30-person company could pull off. Or are you just incredibly competent?

            That may be true, but I would go with propane anyway. Stored oil or gasoline can get nasty, and some events that take out the power may take out natural gas supplies also. A 17kW (140 amp/120VAC) propane or natural gas unit is about $3600 [amazon.com]. If they can't manage that, then a 10kW (80 amp/120VAC) unit is less than $2800 [amazon.com]. If you look on Amazon you may also find other units that will suffice. Many with free shipping. [amazon.com]

            Bear in mind you may need a "break before make" relay, and you still want a short-term UPS while the gen

    • Actually, a generator plus flywheel storage might be the way to go. Get some flywheel units that can run everything for maybe thirty seconds, giving the generator plenty of time to kick in. No downtime at all.

      • Re: (Score:3, Interesting)

        by Hadlock ( 143607 )

        Our company got to the point of almost bribing building management to let us put a generator on the roof for our use, since the cell companies already had diesel generators up there to power their cell phone antenna equipment. They still wouldn't let us though.

        • ...since the cell companies already had diesel generators up there...

          Then I suggest getting a few long extension cords...

          • by Hadlock ( 143607 )

            Or you know, just run some conduit down the utility shaft. A 100' cord would have worked too though, we were only two floors from the top of the building.

  • A second site (Score:3, Insightful)

    by Colin Smith ( 2679 ) on Saturday February 13, 2010 @02:20PM (#31128580)

    With redundant connection.

     

    • Re:A second site (Score:4, Interesting)

      by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Saturday February 13, 2010 @02:38PM (#31128736) Homepage Journal

          There's an awful lot to be said for redundancy. I think he's talking in-house applications, but I'm not positive.

          One company I worked for, we maintained equipment in multiple datacenters, that were fully redundant. Normally, we served from all of them (no warm-standby sites). Over the years, we'd lose datacenters for various reasons. Sometimes it was power. Sometimes it was connectivity. Sometimes it was simple things, like our own hardware died. We've all seen where portions of the Internet can't reach other portions. Such redundancy will save you. It's better to have the reputation of "they just always work", rather than "they're down every time there's a problem in [insert area]".

          Most users won't say "thank you", but they'll be more than happy to complain when you're down. If you have such a presence, you're probably making money on it, so an hour of downtime can easily cost more than the cost of a couple redundant datacenters. With say 3 datacenters, I always made sure we had capacity at each datacenter, in case we had two sites fail simultaneously. While it seems like an almost unheard of event, we did have it happen a couple times in a decade. The providers will apologize profusely, but that doesn't make up for the money lost during the outage.

      • "With say 3 datacenters, I always made sure we had capacity at each datacenter"

        Yeah, sure, that's the way to go for a thirty people company. A price tag, please?

  • by Nuitari The Wiz ( 1123889 ) on Saturday February 13, 2010 @02:22PM (#31128594)

    Not knowing the load required on the UPS makes it very hard to tell what kind of UPS you need. You need to know how many watts are used in the rack to be able to plan some proper UPS capacity.

    apcupsd can be networked between machines and can trigger auto shutdowns of all of them, including VM guests.

    Some virtual machine systems can also suspend all VMs on shutdown, which could be a better alternative than shutting them down. Again, without knowing which VM system you use it's hard to get into details.
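
    Purely to illustrate the networked-shutdown idea (apcupsd's own client/server mode already does this properly, so you wouldn't normally script it yourself), here is a rough Python sketch that polls an apcupsd network information server; the hostname, port, and threshold are assumptions:

      #!/usr/bin/env python3
      # Sketch: poll apcupsd's NIS on the UPS-monitor box and shut this host down
      # when runtime gets low. apcupsd does this natively; this just shows the idea.
      import subprocess, time

      MONITOR = "ups-monitor.example.local:3551"   # hypothetical apcupsd NIS endpoint
      SHUTDOWN_AT_MINUTES = 15                     # e.g. domain controllers need ~15 min

      def ups_status():
          out = subprocess.run(["apcaccess", "status", MONITOR],
                               capture_output=True, text=True, check=True).stdout
          return {k.strip(): v.strip() for k, _, v in
                  (line.partition(":") for line in out.splitlines())}

      while True:
          s = ups_status()
          on_battery = "ONBATT" in s.get("STATUS", "")
          minutes_left = float(s.get("TIMELEFT", "0").split()[0])   # e.g. "44.0 Minutes"
          if on_battery and minutes_left <= SHUTDOWN_AT_MINUTES:
              subprocess.run(["/sbin/shutdown", "-h", "now"])       # or suspend the VMs first
              break
          time.sleep(30)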

    • by TheLink ( 130905 )
      I do that on my home server - suspend VMs when there's only X seconds of battery life left.

      But if your virtual machines are huge (in mem), suspending all of them may take a fair bit of time. 4GB/50MB/sec = 80 seconds (100MB/sec = 40 secs). Some servers have 32GB of RAM.

      FWIW, I use Network UPS Tools and APC Back UPS CS 650. I have tried various cheaper UPSes (less than half the price), but I found during some power failures they don't switch in time (despite what their specs say) - which means my computer st
      • by afidel ( 530433 )
        Your experience is one of two main reasons I hate line-interactive UPSes; the other is that if you get a big enough surge, a double conversion UPS will usually self-sacrifice and protect the equipment. APC makes both and it's sometimes hard to tease out which model is which technology, but for me it's worth the research effort to get the slightly more expensive double conversion units.
        • With APC it's the SmartUPS RT series and up (VT, large Symmetra, etc.) that are double-online; the SmartUPS and SmartUPS XL are line interactive. Another key is if it does that buck/boost thing: that's line interactive. A double-online never needs to do that because it doesn't pass line voltage directly to the load.

          • by afidel ( 530433 )
            Thanks! I wish our VAR had simply told us that, would have saved quite a bit of reading multiple spec sheets and whitepapers for each unit to figure out which tech they used (as you say buck and boost and a couple other keywords were the key to figuring which were line-interactive).
  • by narziss ( 600006 ) on Saturday February 13, 2010 @02:22PM (#31128598)
    It's not about the amount of people, servers, or a fixed time limit to preserve power. First and foremost, you need to identify what the critical systems are that need to be protected. These may include the VM farms, NAS storage, obviously the underlying network infrastructure, and at the very least, some management terminals that can be used in the event of a failure. Once you identify these systems you need to reference the electrical in/output specifications. If possible, you would want to measure the real requirements in production with inline monitors or passive taps.

    After you have built your requirement set (mind you, you may decide it's better to have a few small UPS vs one very very large one) you need to explore what needs to be up, and for how long, and build yourself a model. There are dozens of UPS manufacturers, and tens of thousands of combinations for any sized company. Once you have an outline of the systems and their individual power requirements, coupled with your own requirements for their availability/protected power, it will be relatively easy to build yourself a good level of protection on a small budget.

    Mind you these devices (UPS) can often be found on the second hand market due to company refresh, datacenter closures, etc. Many can be easily re-certified by the manufacturer directly or a variety of 3rd party vendors who specialize in this type of infrastructure.
    • by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Saturday February 13, 2010 @02:48PM (#31128808) Homepage Journal

      It's not about the amount of people, servers, or a fixed time limit to preserve power.

          You're absolutely right. One place I worked had about 20 employees, 150 servers, but had an income of millions per year. The income averaged out to about $5,700/hr. 12 hours of outages per year could cost almost $70,000 in lost revenue. Is it worth $10k in extra equipment to mitigate that? Obviously.

          Smaller companies have to evaluate their acceptable losses. Sometimes it's not worth $100 to make sure you stay up through power outages.

        "5 9's" of reliability still leaves 1.14 hours per year of outages. Of course, that doesn't assume that it's all power related outages. Redundancy across physically diverse locations can and will help there.

      • by afidel ( 530433 )
        The problem is, without enough UPS runtime to cleanly shut down, your brief 30-second power outage can turn into a whole-day or multi-day affair of repairing boxes and verifying and fixing data integrity issues.
        • That all depends though.

          I have a site that gets decent traffic. It makes a few hundred dollars per year. It's not worth it for me to spend even $100 on a good UPS.

          You have to consider the need. If it makes a few hundred dollars per year, 2 days of outage is a trivial cost. Say at $1,200/yr ($100/mo, a high estimate), an outage of a full day has a lost income of $3.28. I fix whatever breaks in my spare time, so my manhour expense is $0. If my server wer

        • Re: (Score:2, Funny)

          by trum4n ( 982031 )
          You run Windows Vista on your servers?
          • by afidel ( 530433 )
            Actually, Windows tends to do the best with unplanned shutdowns; I've had way worse times with Solaris and Linux boxes than I ever had with Windows. I hear AIX and z/OS are even worse, but they are generally used in professional datacenters and so unplanned power outages are much less common.
      • "5 9's" of reliability still leaves 1.14 hours per year of outages.

        5 nines - 99.999% - reliability is about 5 minutes [google.com] of downtime per year, not more than an hour.
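
        For anyone who wants the arithmetic:

          # Downtime per year implied by an availability target.
          SECONDS_PER_YEAR = 365.25 * 24 * 3600

          for nines in (3, 4, 5):
              unavailability = 10 ** -nines          # 5 nines -> 0.00001
              downtime_min = SECONDS_PER_YEAR * unavailability / 60
              print(f"{1 - unavailability:.5%} availability: {downtime_min:.1f} minutes/year")
          # 99.999% works out to roughly 5.3 minutes of downtime per year.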

  • Diesel (Score:5, Insightful)

    by mangu ( 126918 ) on Saturday February 13, 2010 @02:23PM (#31128612)

    No matter how much battery capacity you have, it will eventually run out. If your site truly needs availability, you have to get a diesel generator.

    • Re:Diesel (Score:5, Insightful)

      by Colin Smith ( 2679 ) on Saturday February 13, 2010 @02:40PM (#31128746)

      If your site truly needs availability, you have to get a diesel generator.

      lol.

      If your site truly needs availability, you need a second site.

       

      • by mangu ( 126918 )

        If your site truly needs availability, you need a second site.

        Not if you have adequate fire protection and it's not an area subject to earthquakes, hurricanes, or floods.

        No need to overdesign. Having a well designed system also means complying with a budget.

        • Not if you have adequate fire protection and it's not an area subject to earthquakes, hurricanes, or floods.

          And there is never any construction which might sever a fiber.

          And there are never any vehicles carrying hazardous materials in the neighborhood.

          And there aren't any natural gas lines within a few hundred yards of your facility.

          And there's no chance a deranged wingnut will get out his sniper rifle and shut down the area.

          And the sewer line never clogs and backs up.

          Past a certain point, hardening a single site against all possible disasters, inconveniences, flukes, and freak occurrences (those you c

        • New York isn't subject to earthquakes, hurricanes or floods. If you'll recall the former occupants of the World Trade Center had a bit of a problem with low flying planes though. Those with a second site were up the same day. The guys that didn't... hooboy.

          "Overdesign" depends on your requirements. Billy-Bob's Bargain Basement Hosting doesn't require high availability. If you really need high availability, you don't just have a second site, you have a third or a fourth. You also need a disaster recovery pla

        • Unless your site catches fire and the fire department insists you cut the power before they'll enter, as happened to ThePlanet.

          http://www.datacenterknowledge.com/archives/2008/06/01/explosion-at-the-planet-causes-major-outage/ [datacenterknowledge.com]

          "The fire department is not allowing the company to run backup generators, so the facility has been without power since the incident occurred."

          Adequate fire protection doesn't help that much if the electrical room explodes with enough force to remove three walls.

      • lol.

        If your site truly needs availability, you need a second universe!

        Earthlings and their sub-15-nines (1 second in 30 billion years) availability...

    • What type of contingency are you planning for?
      - Do you need employees? At least where I work, you can't make employees work in the dark. You have to evacuate the building. As such, your diesel generator must be sized to power the lighting, not just the server room.
      - Do your servers need heat? or cooling? Do your employees need heat? The diesel generator must be sized to handle heating and cooling.
      - Does your ISP have a diesel generator? The local telco? Are you running an internet site? If your

  • by Anonymous Coward

    You haven't provided enough information. To answer your questions we'd need to know how many racks, how many watts or volt-amps per rack, or even the type of servers you're running. On top of that you mention that cost is an issue, but you don't mention a budget.

    Without having that info imagine the following scenarios:
    1. You have 1-2 racks with 4-5 pieces of equipment each
    Get a large APC (or comparable unit) for each rack

    2. You have 1-2 racks halfway populated
    Get an expandable h

  • Network UPS Tools (Score:2, Interesting)

    by Yonatanz ( 798506 )

    The open source world has NUT [networkupstools.org] to offer (Network UPS Tools).

    We've been using it at work for all our critical servers. It works with pretty much all UPSes, and on pretty much any production OS, so you can use your existing servers and just buy whatever hardware the budget affords.

    The linux/unix servers and clients are excellent, and there is a reasonable Windows port for the client (which we've modified a little to suit our needs).

    The cost is just your sysadmin's time, as with all F/OSS solutions.
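
    For illustration, a minimal Python sketch of what reading NUT data looks like (NUT's own upsmon handles the real shutdown logic; the UPS name and threshold here are assumptions):

      #!/usr/bin/env python3
      # Read variables from a NUT-managed UPS via upsc and decide whether to act.
      import subprocess

      def read_ups(ups="myups@localhost"):                     # hypothetical UPS name
          out = subprocess.run(["upsc", ups], capture_output=True,
                               text=True, check=True).stdout
          return dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)

      vars = read_ups()
      runtime_sec = float(vars.get("battery.runtime", "0"))    # estimated seconds left
      on_battery = "OB" in vars.get("ups.status", "").split()  # "OL" online, "OB" on battery

      if on_battery and runtime_sec < 15 * 60:
          print("Under 15 minutes left - start a graceful shutdown of the VM hosts")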

  • HP (Score:4, Informative)

    by Thnurg ( 457568 ) on Saturday February 13, 2010 @02:26PM (#31128636) Homepage

    We have had good experiences with the HP R5500 XR [hp.com]. You may require a smaller and cheaper model like the R3000 or R1500 depending on your servers.
    These UPSes are fully supported by NUT [networkupstools.org].

  • And one UPS per rack. Is that like 2 servers each?

     

    • Re: (Score:2, Informative)

      by lukas84 ( 912874 )

      Meh, a 5500 VA UPS can drive a rack full of low-end 2U servers.

      • That wasn't the point being made; the point was that there are 30 people and at least two (most likely more) racks' worth of servers.

        Which if they do any sort of hosting or "software as a service" provision, isn't strange at all.

  • APC SmartUPS (Score:5, Informative)

    by ircmaxell ( 1117387 ) on Saturday February 13, 2010 @02:27PM (#31128644) Homepage
    I have two 3000-watt APC SmartUPSes per rack. They have both serial and USB notification. Since each rack has about 25 servers, I get around 25 to 40 minutes of runtime for each server. So I have a small PC for each rack that monitors those 2 devices. It connects by serial to the UPSes, and runs CentOS. Then I have APCUPSD installed and configured in multi-UPS mode. On each server, I simply install APCUPSD (there is a Windows version), and tell it which UPS it is on. I also configure the appropriate shutdown parameters (20 minutes of battery left for non-critical servers, 15 for DCs, and 5 for other critical servers). I also hooked each UPS monitor into Nagios and Munin, so I can track each one's power output and time remaining. So far, it's worked great over 2 "brownouts", and 1 total power failure (a test where I simply tripped the appropriate breakers).

    The rationale behind having dedicated UPS monitors is that I don't really care if they lose power while running, so I have them set to never shut down from UPS activity. Then, I simply implemented a script that, on power restore, issues a netboot command to each server under its control (configured with puppet for Linux, AD for Windows). That way, all of the servers automatically shut down and turn themselves back on, even if they never really lost power... So far, it's worked flawlessly, and with Nagios I get a text message on my cellphone within a minute or two of a UPS switching to battery (we have 2 dedicated internet connections that are on different power sources and different UPSes).

    I hope this helps!
    • I have 3 racks of gear and 1.5 racks of UPS. It's kinda ridiculous but I get around 1.5-2 hours of full server room power.

      I got one full UPS rack at a firesale last year when the economy tanked :D
  • by darkjedi521 ( 744526 ) on Saturday February 13, 2010 @02:29PM (#31128654)

    It's time to break out the calculators and do some math. There are two main factors at work here, UPS load capacity and battery run time. I run a series of research clusters at a university, so only the core systems (landing pads, schedulers, auth, disk arrays) are on UPS and all the compute nodes just die at a power hit.

    Retrofitting a datacenter for whole center UPS is a very daunting and expensive task, so odds are good you'll be replacing the current rack mounts with beefier units, either pedestal sized units next to their racks or rack mounted units.

    When buying UPS gear for work, I aim to hit either 67% capacity with the planned load, or the smallest VA rating that takes 208V single phase, as long as it's at least 1/3 underutilized for future expansion (a rough sizing sketch follows below this comment). That covers the VA rating. As for battery run time, most of the larger units accept external battery packs to increase the run time. I've never used them, since a 5 kVA unit with my load gives me 20 minutes of run time, and if the power isn't back on by then, odds are good it's not coming back any time soon.

    Another option for extending UPS run time is to prioritize services/VMs. With the appropriate monitoring software on each host, you can configure each host to shut down when the UPS estimates X minutes of battery time remaining, or when there have been Y minutes on battery, or both. Less load, more run time for the really important stuff. Almost every UPS I've used (APC, Tripp Lite, Powerware) comes with off-the-shelf software, or there are open-source solutions (apcupsd, NUT) for monitoring the UPS over serial, USB, or SNMP (options vary with manufacturer and model). My shutdown schedule is: after 5 minutes on battery, power down the compute cluster landing pads. With 10 minutes remaining, power down the file servers with the archival data on them. With 6 minutes remaining, power down the primary file servers. With 2 minutes remaining, power down the auth box/network monitor/iLOM control host (this is the only one that can't get powered on/monitored remotely).
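
    Going back to the ~67% capacity rule above, a quick sizing sketch; the measured load and power factor below are placeholders, so use a power meter or the PDU readouts for real numbers:

      # Convert a measured load into a target UPS VA rating with ~1/3 headroom.
      measured_watts = 2600        # hypothetical steady-state draw for the rack
      power_factor = 0.9           # typical for modern active-PFC server supplies
      measured_va = measured_watts / power_factor

      target_utilization = 0.67    # leave roughly a third of the rating free for growth
      required_va = measured_va / target_utilization
      print(f"Load: {measured_va:.0f} VA -> look for a UPS rated around {required_va:.0f} VA")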

  • Inverters (Score:5, Informative)

    by mukund ( 163654 ) on Saturday February 13, 2010 @02:30PM (#31128666) Homepage

    I use a Su-Kam [su-kam.com] inverter at home. It powers a whole room, has a clean sine-wave output (unlike traditional UPSes), and its switchover delay is small enough that the SMPS in computers handle the switchover to battery power properly.

    It uses two large lead-acid multi-cell batteries [wikipedia.org] (~car batteries) for storing charge. The last time there was a major power cut, it powered my computer systems for 10 hours (yes you read that right... 10 hours.)

    I was laughing at the old APC UPS which did 10 minutes before I had to power down.

    This is India btw.. power cuts are common.

    • Re:Inverters (Score:5, Informative)

      by mukund ( 163654 ) on Saturday February 13, 2010 @02:35PM (#31128718) Homepage
      Oh and I forgot to add.. the whole setup cost me ~$600 including installation. Maintenance costs about $1 / month (topping up distilled water levels). Also the 10 hour duration I quoted above was the duration before mains power came back on. I suppose it would work for quite a bit longer, but the power hasn't been out for a longer duration. Also, this is a "home" sized product. You can get larger solutions if you want longer backup. Many companies use such solutions in India to beat the power cuts.
    • "I was laughing at the old APC UPS which did 10 minutes before I had to power down."

      I grab those and connect them to car batteries, both as battery tenders in my shop and for UPS use.

  • Larger UPS (Score:3, Interesting)

    by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Saturday February 13, 2010 @02:30PM (#31128668) Homepage Journal

        It sounds like you may have outgrown the traditional "UPS". They're fine and dandy as long as you're only powering so much equipment. There are some huge options (large in physical size, and more so in price).

        A decent alternative may be a DC power room, with generator backup.

        Basically, you have banks of batteries, with true sine wave power inverters on them. The power coming in goes to charge controllers. Depending on how you set up, these can get pricey too. There are some nice (and expensive) units that handle both the charge controlling and inverting, and will automatically switch between the incoming power and batteries. Look at the higher end Xantrex units, made for on/off grid purposes.

        The less expensive way would be to break up your battery banks by power circuit. Say a 15A power circuit per set. Put a dependable inverter on the rack side of the batteries, and a good charge controller on the line side. Separate inverters for each circuit may not seem like the best idea, and the overall efficiency will hurt because of it, but an inverter failure will only mean one circuit goes down, not the whole place. It's affordable to keep a few spare $300 inverters on hand, where it's harder to ask for a few spare $3,000 inverters.

        You'll also want an automatic crossover so that, if your line power should fail, you can bring up a generator. The batteries shouldn't be intended to last for hours. They should only last as long as it takes to bring up the generator (say 1 minute). Expect that there may be generator problems though. In a prolonged outage, you may need to shut down the generator to refuel, so the batteries may need to last for hours. At the very least, if your generator fails, and line power doesn't come back up, you have that hour to gracefully shut down your equipment.

        Such a setup can be made to make your company more "green" too. Are you in a situation where you could put a large array of solar panels on the roof, and have enough battery power to last you through the night and then some? You could bring your power bill down to almost nil, or possibly feed back to the power grid (with the appropriate permission and power meter), and make a little money in the process. The long term savings may warrant a raise for you. :)

        There are plenty of consultants that can evaluate your needs, and provide the appropriate solutions. As you talk to various consultants, several will say the others are giving you bad advice. Look at all of them, and research them for yourself before making a decision. Remember too, it's in *their* best interest to sell you the most expensive units possible, while you probably want the most reliable and cost effective.

  • by Hadlock ( 143607 ) on Saturday February 13, 2010 @02:33PM (#31128686) Homepage Journal

    What's the cost of a good set of UPSes vs simply migrating to a Colo & fatter pipes? Datacenters (most of them anyways) promise at least a few hours of generator uptime, and it sounds like you're already using a colo somewhere (dns relocation, etc).

    • by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Saturday February 13, 2010 @03:19PM (#31129036) Homepage Journal

          I think he was talking about in-house servers, but I could be mistaken. It's good to be in a *GOOD* datacenter that has the proper redundancy. Most of the good ones have multiple generators and tens of thousands of gallons of fuel stored. They can stay running indefinitely, assuming they can get fuel supplied before they run out.

          I did work in one good one. They had a DC powerplant to supply at least 24 hours of power. They also had two diesel turbine generators, and something like 10,000 gallons of fuel, which would provide power for 7 days. In talking to the senior techs who had been there an awful long time, they said the generators had kicked on quite a few times. Only once in about 20 years had they needed to refuel. It got touchy. The power was out for about 14 days. It took 6 days to get a refueling truck in, because it was a nasty blizzard, and all the roads had been closed for days. They were starting to notify the customers of a potential power outage, when two fuel trucks finally arrived. One refilled their tank, and the second was left parked there, in case power wasn't restored in time.

          That was a huge facility, and they had the power to say "bring us trucks now", and not be put off for larger customers.

          I wasn't impressed by the advertised specs of the site. They were good, but it's easy to lie about the specs. I *was* impressed by the site, when I walked through, and was allowed (with an escort) to see their primary data room (many OC192's), the DC power room, and generators. I wasn't getting the sales tour. I was getting the tech tour, because the senior guys wanted to tell me all about their stuff, and we had a chance to talk about all of it.

          I've been to many datacenters over the years, and many have failed to be as good as their advertising made them sound. N+1 generators can be a few 11 kW generators out by their dumpsters, or massive industrial generators. Maybe they test them once a year, or once a week. Maybe they work, maybe they don't. It's less than impressive to see the generators sitting outside, covered in rust, and looking like they were purchased 2nd hand and hadn't been maintained since 1950.

          At one site (again, an impressive site), they had an absolutely huge DC room, and I was there a couple times when they received phone calls to turn on their generators because the power company needed the extra capacity. A couple 1 MW generators may make the difference between constant power and widespread brownouts.

          The impressive datacenters were way beyond anything I could possibly talk my management into doing in-house.

  • by chill ( 34294 ) on Saturday February 13, 2010 @02:46PM (#31128794) Journal

    Co-locate your equipment at a carrier-grade data center in the nearest major city to your location and get a leased line to your premises. A decent data center will have proper battery backup and generators and know how to handle it. They'll also have the time and manpower to do proper tests, etc.

    • The colo can have a number of benefits, but from a cost and complexity perspective it will fail. 2x20kW UPSs with a transfer switch for each rack will pay for itself in under a year. The only exception to this is if you need a backup generator and are in a leased, multi-level building. Then the payback period might hit 3-5 years, especially if you can get cheap gigabit links to a colo.

  • As you mention virtual servers, I'm going to guess that part of the problem is that you have large-ish servers.

    My suggestion would be that if you have different uptime requirements for different services, to segregate them to different machines with a dedicated UPS.

    Our office has between two and four 3000VA MGE Pulsars per rack, depending on how much power they draw and how long we need to keep things up. (Although APC now owns them, the MGEs are more power-dense than the APC Smart-UPS line, as they're onl

  • One challenge is knowing how many VA your servers draw, which varies depending on how much RAM they have, how many disk drives, and even how busy the servers are. There is no boilerplate information that can help you with this. To spec your UPS's properly, you need to connect a power meter to each group of servers and monitor the power consumption under typical load.

    Once you have an accurate idea of the load, you can look at UPS manufacturer's data to determine how much runtime to expect. For example, if a

  • Might be a lot faster.

  • Eaton is excellent (Score:2, Interesting)

    by wysiwig3 ( 549566 )
    I moved away from monster APC & Leibert units a bit over a year ago, and I'm so glad. I encourage you to look at the Powerware BladeUPS units. Each provides 12kW capacity, with internal batteries and the ability to string two additional external battery modules (EBMs) for increased time. In addition, the unit is stackable up to 6 high in a cabinet yielding 60kW (in an N+1) configuration. You can grow it as you need it. Nice Web/SNMP card that can be added for all the info you could want. With the
  • Not enough information to make a decision here. 30 end users with some virtual servers... What's the impact (co$t) of downtime? What sort of traffic do these systems support? How many physical servers, and what are the network, power and cooling requirements? You probably don't want your UPS to run these systems when cooling has been out for hours.

    So, work out what financial impact downtime will have. Then you can start looking at options. At one end of the scale, move everyt

  • Comment removed based on user account deletion
  • With most UPS systems on the market, the battery is sized for an estimated 5~10 minutes of run time for the power capacity of the UPS. There are actually two things to consider: The storage capacity of the batteries and the power capacity of the UPS. So, if you have a 1400VA UPS, you'll probably only have a couple 12V 17AH batteries. To get a longer "runtime" commercially, you would have to get a higher capacity UPS. This would cause you to buy a heavier duty UPS than you really need and spend a lot more f
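
    A rough upper-bound runtime estimate for that example configuration; the load, usable capacity, and inverter efficiency below are assumptions, and real runtime at high load will be noticeably worse because lead-acid capacity drops at high discharge rates:

      # Optimistic runtime estimate: 1400 VA UPS with two 12 V / 17 Ah batteries.
      batteries = 2
      battery_wh = 12 * 17            # 204 Wh each, nominal
      usable_fraction = 0.8           # don't count on draining SLA cells flat
      inverter_efficiency = 0.85

      load_watts = 600                # hypothetical load on the UPS
      stored_wh = batteries * battery_wh * usable_fraction * inverter_efficiency
      runtime_min = stored_wh / load_watts * 60
      print(f"~{runtime_min:.0f} minutes at {load_watts} W (optimistic upper bound)")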

  • Have you considered trying a bunch of these [tinyurl.com]?
  • Colocation (Score:3, Interesting)

    by liam193 ( 571414 ) * on Saturday February 13, 2010 @05:08PM (#31129908)

    With the current availability of fairly inexpensive bandwidth, why are you running servers at your location? There simply isn't much justification for any business not in the Fortune 500 to go the route of "build your own" datacenter. If it must be up, look at the option of renting rack space from a telecom provider that takes care of generator power for you. Most of these will do a rack for a couple hundred a month that includes the generator backup. You may need to get a small UPS that handles the "blip" until the generator kicks in (they usually tell you that you need a few seconds of UPS), but it sounds like you already have units to put at the bottom of the rack that will handle that. You then have servers that will survive as long as the provider has fuel. Anything else is going to cost you far more. Most likely you can find one that will provide decent bandwidth from your location to theirs and provide you with an Internet connection at the Colo that is less expensive because it doesn't have the local loop to your facility. This probably would offset much of the cost for bandwidth that you will need from your office to your servers at the Colo.

  • by Hymer ( 856453 )
    A UPS is not an alternative to redundant PSUs; it just seems that way until a PSU fails.
    I've got a server room with about 150 servers (physical and virtual); every physical server has redundant PSUs and the whole room runs on a Powerware 9305 30 kVA UPS.
    ...and yes, I should have redundant UPSes too, I just don't have the room for another one.
  • I don't need any fancy new UPSes, but I sure could use a whole lot of cheap ones - maybe 150 kWh total capacity. Where's somewhere to buy at the lowest $-per-kWh that can actually be gotten from some old ones, even if they're a little worn out, so long as they'll last another 5 years at that superior $/kWh? Even if they fill a whole room.
