Hot Aisle Or Cold Aisle For Containment? 181

1sockchuck writes "Separating the hot and cold air in a data center is one of the keys to improving energy efficiency. But containment systems don't have to be fancy or expensive, as Google showed in a presentation Thursday, which discussed the use of clear vinyl curtains in isolating hot and cold aisles. Containment systems have been in use at least since 2004, but there's an ongoing debate about whether it is best to contain the hot aisle or cold aisle. Leading vendors are split as well, as APC advances hot aisle containment while Emerson/Liebert champions a cold aisle approach. What say Slashdot readers? Do you use containment in your data center? If so, do you contain the hot aisle or cold aisle?"
This discussion has been archived. No new comments can be posted.

  • by shogun ( 657 ) on Saturday May 01, 2010 @05:35PM (#32059156)

    I thought this article was about supermarkets, where it might be a good idea anyway.

    • by arielCo ( 995647 )
      I knew Google was keen to expand into new markets, but Oracle?
    • Re: (Score:3, Informative)

      by burne ( 686114 )

      Dutch supermarkets are doing that. Test your Dutch: original [amsterdam.nl] or try Google Translate: translated [google.com].

    • Re: (Score:3, Funny)

      I'm disappointed that I don't see a single Tesla vs. Edison reference. :(

      • by Z00L00K ( 682162 )

        I didn't know there was an Edison car out there. I know of the Tesla.

        • by Miseph ( 979059 )

          It's very much like the Tesla, even uses the same technologies. But it's much less efficient, cheaply made, has a much lower range, needs to be replaced 3 times more often, has a much better marketing department and only costs 60% as much if you buy RIGHT NOW (note: price subject to increase after promotional period).

          Oh, it's also not so much a car as a shiny metal box that makes whirring noises and hits you in the crotch with a hammer... but it also bounces a little, so eventually you might bounce where yo

  • by e9th ( 652576 )
    Depending on how your facility is ducted, it might not cost much to try both options and measure the results. Even if you have to spend a few thousand doing so, the long term savings from choosing the best method for your site would probably be well worth the cost of testing.
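    A rough way to size that bet (an editor's sketch with made-up numbers, not figures from the thread): even a modest PUE improvement from picking the better containment scheme pays back a few thousand dollars of testing quickly.

      it_load_kw = 500                    # assumed IT load of the facility, kW
      pue_worse, pue_better = 1.8, 1.6    # assumed PUE with the worse vs. better containment choice
      price_per_kwh = 0.10                # assumed utility rate, $/kWh

      facility_kw_saved = it_load_kw * (pue_worse - pue_better)
      annual_savings = facility_kw_saved * 8760 * price_per_kwh
      print(f"~${annual_savings:,.0f}/year")   # ~$87,600/year under these assumptions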
    • by twisteddk ( 201366 ) on Saturday May 01, 2010 @06:12PM (#32059420)

      Well, as most companies that have to build a new datacenter will tell you, it's cheaper to generate heat than cold. So I'd go for cold containment. Generally speaking, most companies do AIM to put their new datacenters as close to the north pole as possible, simply because it's cheaper to use outside air that's naturally cold. That puts countries like Canada, Greenland, Denmark, Norway, Sweden and Finland in high demand for datacenters (end technicians to staff them). If the US didn't have ridiculous data laws, Alaska might also be ideal.

      In our new datacenter we're even using the excess heat from the servers to heat the offices on top of the giant basements below. This sort of setup is ideal where outside temperatures generally stay below the normal cooling needs of a server (or several). But in any event there's still a huge bill to pay for moving the air back and forth, so containment is definitely still an issue, as is the size of the pipes when you have, say... 10MW of electricity going into your servers and quite a lot of that energy coming back out as heat ;)
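      To put the "size of the pipes" in perspective, here is a sketch: the 10MW figure is from the comment, but the 15 K hot/cold temperature difference and the air properties are assumed.

        q_watts = 10e6           # 10 MW of IT load, essentially all of it ending up as heat
        cp_air = 1005.0          # specific heat of air, J/(kg*K)
        rho_air = 1.2            # density of air near room conditions, kg/m^3
        delta_t = 15.0           # assumed hot-aisle minus cold-aisle temperature rise, K

        mass_flow = q_watts / (cp_air * delta_t)   # kg/s of air the cooling loop must move
        volume_flow = mass_flow / rho_air          # m^3/s
        print(f"{mass_flow:.0f} kg/s ~= {volume_flow:.0f} m^3/s of air")
        # prints roughly "663 kg/s ~= 553 m^3/s of air" -- hence the giant ducts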

      • by Thundersnatch ( 671481 ) on Saturday May 01, 2010 @07:55PM (#32059900) Journal

        That puts countries like Canada, Greenland, Denmark, Norway, Sweden and Finland in high demand for datacenters (end technicians to staff them). If the US didn't have ridiculous data laws, Alaska might also be ideal.

        Some datacenters perhaps, that don't need good Internet connectivity. But the latency between major populations and the far North makes those locations less desirable. We struggle with the latency between Chicago and Dallas with some applications; Chicago-to-Fairbanks would be quite a bit more painful.

        • by starfishsystems ( 834319 ) on Saturday May 01, 2010 @10:04PM (#32060558) Homepage
          You can build a switched network to connect the remote data center to the point of presence where you want it to join the backbone.

          Though this does nothing to mitigate time-of-flight latency, it nicely eliminates the latency and jitter issues due to routing. It's what we did at Westgrid to connect our computing clusters to storage facilities many hundreds of kilometers away.
      • by ZorinLynx ( 31751 ) on Saturday May 01, 2010 @08:18PM (#32060016) Homepage

        >10MW of electricity going into your servers and quite a lot of that energy coming back out as heat ;)

        All of it. The laws of thermodynamics are clear.

        Sorry, I can't help being a smartass sometimes. ;)

        • Well if you want to be strictly correct, and it seems that you do, some of it will be converted to acoustic noise and escape through the walls, or be transmitted out through the wires or end up changing the magnetic potential energy in hard disk platters.

      • by loshwomp ( 468955 ) on Saturday May 01, 2010 @08:49PM (#32060164)

        Sweden and Finland in high demand for datacenters (end technicians to staff them).

        Why would you want to staff your datacenters with proctologists?

      • Well, as most companies that have to build a new datacenter will tell you. It's cheaper to generate heat than cold. So I'd go for cold containment.

        I agree - contain the cold to keep it cold (keeping heat out of that area, really). Cooling works not by "adding cold" but by removing heat, and the retired HVAC engineer I know (heating and cooling, not hack/virus/anarchy/carding) insists that cooling always requires more effort and care than heating. He did a lot of work for big clients that include Anheuser
      • My apartment is backwards. The AC runs almost constantly in the summer and the heater runs about 50% of the time. In the summer, my bill is consistently around 1350 kW/h but almost 1500kW/h in the winter. It seems my heater uses over twice as much electricity.

        • by Surt ( 22457 )

          That is a lot of kw/h. Over a million kwh in the winter months? You run a million dollars worth of power through an apartment per year? I'm impressed the lines hold up.

            I don't think you understood that right. I used to work in the retail electricity industry. Take a look at your electric bill and you'll see similar numbers. They might be as high as 3000kW/h or as low as 500kW/h.

            • by Surt ( 22457 )

              I'm pretty sure my bill comes in kwh, not kw/h. And that was the point of my joke.
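              In numbers (assuming a roughly 730-hour month): 1,500 kWh over the month is an average draw of about 1500 / 730 ~= 2 kW, whereas reading "1500 kW/h" as a sustained 1,500 kW would mean 1500 kW x 730 h ~= 1.1 million kWh, which is where the "over a million" figure comes from.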

        • It depends on what type of heating system you have. Your heat load will also be affected by whether you're on a top floor or bottom floor as well as your exposure to the sun (i.e. if you're on a bottom floor and don't get much sun exposure, you won't have as much heat to remove from your place in the summer, but you won't get as much "free" heat in the winter).

          If you have the older style "strip heat" (which is basically a series of resistive heating elements) you will always use more energy for heating.
      • by Roger W Moore ( 538166 ) on Saturday May 01, 2010 @10:36PM (#32060744) Journal

        It's cheaper to generate heat than cold. So I'd go for cold containment.

        Actually, containing both might be best, since then you will have a "room temp" air gap between the two, and air is a fantastic insulator. If you do not contain the hot then the heat will diffuse and the air on the other side of the vinyl curtain will be warmer than room temp. This will warm your incoming cool air. The effect may not be particularly noticeable, but it would be an interesting test to see if there is a measurable improvement from doing both.

      • by Z00L00K ( 682162 )

        Iceland would be even better since their weather is always on the cold side. And they have close access to geothermal energy too so you won't need to burn oil or use nuclear energy to power your computers.

        Just hope that you haven't built your data center on top of a volcano.

        In any case - one thing that usually is forgotten is that you can cool data centers using water from a river. The water is usually relatively cold even on hot summer days.

    • Re: (Score:3, Insightful)

      by pla ( 258480 )
      Depending on how your facility is ducted, it might not cost much to try both options and measure the results.

      Call me naive, but... Why not do both at once?

      Cold air goes in from the bottom (or one side), through the rack, and hot air goes out the top (or the other side). I realize that companies don't really care about such minutiae, but that would allow the mere humans that occasionally need to service all those expensive racks to experience a temperature other than 40F or 120F.

      Or, hey, how about ju
  • Cold (Score:5, Funny)

    by Waffle Iron ( 339739 ) on Saturday May 01, 2010 @05:43PM (#32059216)

    What say Slashdot readers? Do you use containment in your data center? If so, do you contain the hot aisle or cold aisle?

    I think that I speak for most readers here when I say that it's pretty much all cold aisle down here in my mom's dank basement. Not much containment either, other than some pegboard partitions.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      Yeah, it's definitely cold down there. But your mom was hot! The lack of decent containment did spoil the fun somewhat!

  • by mysidia ( 191772 ) on Saturday May 01, 2010 @05:44PM (#32059222)

    Contain and exhaust your heated air, vent it up outside

    That way it doesn't mix with the cold air much.

    If you just contain your cold air, then the hot air stays in the room, and that heat will be absorbed over a larger surface area by all the things in your server room (including the air handling units).

    • Re: (Score:3, Interesting)

      by nacturation ( 646836 ) *

      You can still vent the hot air elsewhere, but the problem with hot-air-only containment is that the entire room then becomes effectively one large cold aisle, contained within the walls, and the limiting factor is how well insulated the walls are. If that logic holds, it's better to limit the size of the cold aisle, since you can add a lot more really good insulation where appropriate to limit unwanted heat absorption.

    • Re: (Score:3, Insightful)

      by SuperQ ( 431 ) *

      I think containing the hot aisle is probably the best way to go as well.

      * When I'm working in a datacenter I'd rather be walking around in the cold aisle (~70-80F in a modern datacenter) than the hot aisle (100-120F if properly contained)
      * Containing the hot aisle to a small space and using the rest of the air and space around the rack (up to the ceiling, walking aisles, etc) allows more volume of cool air to be a buffer in case of low/failed cooling capacity.

      • I think containing the hot aisle is probably the best way to go as well.

        * When I'm working in a datacenter I'd rather be walking around in the cold aisle (~70-80F in a modern datacenter) than the hot aisle (100-120F if properly contained)

        This is probably diverging a bit from the original question, but seeing your 70-80 "modern datacenter" range reminded me of something I've wondered about lately: has anyone researched the tradeoff between turning the temperature up to run a "hot" floor and the point at which the server cooling fans start spooling up? Running fans at a higher RPM certainly translates into more current draw than if they're running at their lowest speed. Sure, the equipment can stand running hotter and you're being "green" by not runnin

          • I think containing the hot aisle is probably the best way to go as well.

          * When I'm working in a datacenter I'd rather be walking around in the cold aisle (~70-80F in a modern datacenter) than the hot aisle (100-120F if properly contained)

          This is probably diverging a bit from the original question, but seeing your 70-80 "modern datacenter" range reminded me of something I've wondered about lately: has anyone researched the tradeoff between turning the temperature up to run a "hot" floor and the point at which the server cooling fans start spooling up? Running fans at a higher RPM certainly translates into more current draw than if they're running at their lowest speed. Sure, the equipment can stand running hotter and you're being "green" by not running the A/C as much, but are you just trading that for extra power wasted on spinning a whole lot of fans faster?

          That really depends on datacenter design. There are lots of factors that affect it. While my place is not a datacenter, we do run a pretty decent-sized stack of servers. The cooling (provided from the wrong side of the room, sadly) needs to be set in the low 60s range to keep the temperature in the server area at about 80 degrees.

          We will soon be installing a hot air containment and venting system to help with our cooling. It should help considerably. And during the winter, the hot air will be blown to

        • Re: (Score:2, Informative)

          by Daengbo ( 523424 )

          A recent article on Google's data centers said that they run as close to maximum temperature as possible: if the servers are rated to 90, they only cool to 88. Google is extremely efficient. The article said that the energy overhead for their data centers is only about 20%, while most data centers run 100%. Because of that, I'm sure Google has studied the server fan issue and determined that it's not a significant factor.
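          In PUE terms (assuming "overhead" here means non-IT energy as a fraction of IT energy, so PUE = 1 + overhead): 20% overhead corresponds to a PUE of about 1.2, while 100% overhead corresponds to a PUE of about 2.0.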

            The pictures I've seen of Google servers show them caseless and without many fans [arstechnica.com] other than the CPU fan and probably one in the PSU. I don't think Google can be compared to the typical rackmount server the rest of us would use.

          • A recent article on Google's data centers said that they run as close to maximum temperature as possible: if the servers are rated to 90, they only cool to 88. Google is extremely efficient. The article said that the energy overhead for their data centers is only about 20%, while most data centers run 100%. Because of that, I'm sure Google has studied the server fan issue and determined that it's not a significant factor.

            Google also uses cheap commodity hardware that they fully expect to fail unexpectedly (one of their known unknowns). Since they design massive redundancy with this in mind, it is not a problem for them. If your systems are not massively redundant, then it becomes a problem. The hotter you allow your systems to run, the greater chance you have of components failing... especially hard drives.

      • You've hit on an important point here: human beings do in fact do some work in this space. Do you really want the guy racking your servers to have wet, sweaty palms?

    • Re: (Score:3, Interesting)

      by Bigjeff5 ( 1143585 )

      I suggest containing the cold air.

      If you contain the hot air you must cool a much larger area, which is very inefficient and makes anybody who must work in the server room less comfortable when compared to allowing waste heat to warm the main areas. More comfortable, less energy wasted cooling the cold aisle, and less energy wasted venting the hot aisle.

      A vinyl partition is plenty of separation, and if you want to upgrade, use two vinyl partitions separated by an air gap. That's the same basic setup that

      • by pla ( 258480 )
        I think you meant that as humor, right? But in case not...


        and makes anybody who must work in the server room less comfortable when compared to allowing waste heat to warm the main areas

        You work in Siberia, perhaps? The hot side of even a small server room, nevermind a data center, stays well over 100F. Not exactly comfy for most humans.


        Temperature lost through seepage from solid objects is going to be minimal, at best, unless they are made of large sections of aluminum or copper.

        Or, say, row
    • Contain and exhaust your heated air, vent it up outside

      As others have noted, you are then cooling a much larger space. I don't know that one is significantly better than the other. The cynic in me says that companies that like their employees will have the isolated space be the one that most resembles the predominant environmental extreme (i.e. isolate the warm side in hot climates), and companies that like their employees to suffer do the opposite (i.e. pipe cold air to the backs of the racks, making the room 85F in July).

  • by thewiz ( 24994 ) * on Saturday May 01, 2010 @05:46PM (#32059244)

    We only resort to using containment when the servers have been very, very naughty. We've found that chains, steel cable and duct tape are the best ways to keep servers in their racks.

  • Cold (Score:5, Insightful)

    by KingDaveRa ( 620784 ) on Saturday May 01, 2010 @05:49PM (#32059264) Homepage
    If we were to retro-fit it at work, I'd say cold aisle. To do so would mean curtains at the ends of the aisles, as the under-floor vent grids are in front of the racks. The CRACs are at the end of the room sucking in air through the top, so it'd be cool air pumped up through the floor into a cold-only zone, sucked through the racks, and blown out the back into the rest of the room, where it just swirls about until it's pulled into the CRACs again. I reckon it could be done cheaply and quickly. To do it with the hot aisles would require more containment to get the air back to the CRACs. I think it'd be a case of which airflow it fits best.
    • Yeah, what you do is going to depend on how your room was designed. Mine is 100% ducted, so it doesn't matter. Just drop curtains down from the ceiling to the top of the racks and bam, done. Each cold aisle has supply vents and each hot aisle has its own return. I don't know if you'd call that hot or cold aisle containment; I suppose it's containing both.

      What you're describing with underfloor cold supply already has hot/cold containment: the raised floor. Simply extending that to enclose the front of the ra

        That was pretty much my thought. Our other data centre is a horrible mess of cages, walls, and wall-mounted AC units. It's being slowly closed down, luckily!
        • Wall mounted A/C? Icky. I've only seen that in one other place that my local competitor calls their "business class colocation facility". They have the things randomly all over the place. I can't imagine any way to tame that kind of airflow.

            It started as underfloor. Great big CRAC. However, it died, the parts were no longer available, and getting a new CRAC in was apparently impossible. So the wall-mounted units went up. We've got 12 of them, and they barely keep up. Many are blowing into the backs of racks. It's a right mess. There are massive hot and cold spots. As I say, though, the room is being wound down, so there won't be much left in there soon.
  • What did I find when I joined the sysadmin team at my place?

    Putting cold air vents behind the racks doesn't help. Pull cold air through the front to the back? Nope. Chill the exhausted air because it sucks to walk behind the servers. Nice.

    • What did I find when I joined the sysadmin team at my place?

      Putting cold air vents behind the racks doesn't help. Pull cold air through the front to the back? Nope. Chill the exhausted air because it sucks to walk behind the servers. Nice.

      We used to have all of the servers aligned to face the same direction... so like 5 or 6 rows of racks where only the very first row consisted of servers not pulling in the exhaust from the row across the aisle. Now the warm exhaust aisles are nice places to thaw out if you've been at a KVM directly in the flow of the cold air.

  • Hot or Cold? (Score:5, Insightful)

    by siglercm ( 6059 ) on Saturday May 01, 2010 @05:50PM (#32059278) Journal

    Maybe I'm missing something obvious, but the answer shouldn't be complex. Base the decision to contain either hot or cold aisles on the differences from ambient temperature. If (HotT - AmbientT) > (AmbientT - ColdT), then contain the hot aisles. If it's the other way around, contain the cold aisles. This minimizes the entropy loss due to temperature mixing in the data center, I believe. Just my 2 cents.
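    As a literal sketch of that rule (the example temperatures are made up):

      def contain_hot_aisle(hot_t, ambient_t, cold_t):
          # Contain whichever aisle is further from ambient, per the rule above.
          return (hot_t - ambient_t) > (ambient_t - cold_t)

      # e.g. hot aisle at 110F, ambient 75F, cold aisle 65F:
      print(contain_hot_aisle(110, 75, 65))   # True -> contain the hot aisles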

    • by jo_ham ( 604554 )

      I think it's going too far though, if you start trying to work out Gibbs Free Energy change for your server room.

      • by jbengt ( 874751 )
        I'd like to know more about how he gets that entropy loss, though. He may have hit on the secret to making a perpetual serving machine.
        • Re: (Score:3, Funny)

          by jo_ham ( 604554 )

          More threads!

          More threads means more entropy!

          Get a big enough T x deltaS and the server will cool itself!

    • Re:Hot or Cold? (Score:5, Informative)

      by SuperQ ( 431 ) * on Saturday May 01, 2010 @06:29PM (#32059518) Homepage

      This sorta doesn't work, because what you care about in datacenter cooling is maintaining a constant equipment inlet temp. For all practical purposes this means your AmbientT and ColdT are the same. What you did get right is that you want the largest delta T in your cooling equipment to provide efficient cooling. No matter what you do with hot or cold "containment", the end goal is to keep the HotT as high as possible when it hits your cooling system.
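      The airflow side of that argument, as a sketch (the heat load and temperatures below are assumed, not from the post): for a fixed heat load, the volume of air the cooling system has to move scales inversely with the hot-to-cold temperature difference it sees.

        heat_kw = 200.0              # assumed heat load handled by one cooling unit, kW
        cp_air, rho_air = 1.005, 1.2 # kJ/(kg*K) and kg/m^3 for air

        for delta_t in (5, 10, 20):  # return (hot) temp minus supply (cold) temp, K
            flow_m3s = heat_kw / (cp_air * delta_t * rho_air)
            print(f"dT = {delta_t:2d} K -> {flow_m3s:5.1f} m^3/s of air")
        # dT =  5 K -> 33.2 m^3/s; dT = 10 K -> 16.6; dT = 20 K -> 8.3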

    You don't care about raw temperature transfer; you care about the energy used to either cool or heat.

      Is it cheaper to cool? Or is it cheaper to heat?

      Can you save money and energy by containing the cool area and allowing the hot aisle to heat the rest of the room? Or is too much heat your problem, and you'd be cooling the whole space anyway, so cool the whole datacenter and contain the heat?

      Honestly, I think the best solution for almost all situations would be to contain both hot and cold aisles. Chances are yo

      • by networkBoy ( 774728 ) on Saturday May 01, 2010 @07:20PM (#32059740) Journal

        So far everyone's got it wrong.
        Since you don't want unauthorized people in the DC, you seal it up (with only exhaust ports and a door). Pump LN2 into evaps in the room. Authorized techs are issued Scott Air Packs. Unauthorized people expire before they can do damage, and as a bonus the room has built-in fire suppression ;-)

        -nB

        • by TopherC ( 412335 )

          I like that idea! But the air inside might become too dry and cause problems with static electricity. I don't see any other problems.

    • by PPH ( 736903 )

      Hot, Ambient, and Cold temperatures. We're starting to get close to identifying the important parameters. Another one is Outside ambient. You've got to figure the heat transfer from indoor ambient to outside ambient. And you've got to do that for (in some cases) a wide range of outdoor ambient temps. Not only do you want to minimize the unwanted transfers between your cold-side or hot-side air and the indoor ambient, you need to consider the transfer for the overall building envelope. Each of these heat flow

  • by Jave1in ( 1071792 ) on Saturday May 01, 2010 @05:58PM (#32059308)
    The best solution is going to be based on the average ambient temperature of your location. If you're in a hot environment, why contain the cold if you need additional A/C in the datacenter for employees? Reduce costs by using the same equipment to cool both. If you're in a cold region, then let the heat also warm the datacenter. If you're in an ideal temperature environment, then you don't have much to worry about besides good air flow.
    • by Inda ( 580031 )
      So, expanding the solution further, build one side of the data-centre in a cold environment, the other side in a hot environment. Air flow could be solved by elevating one end, causing natural ventilation (the "chimney effect").

      Or have I missed something?
    • Employees working in air conditioned comfort is a "nice to have". Equipment not overheating is a "need to have".

      Omitting the building HVAC is probably a dumb idea. If the servers get more energy efficient, they throw off less waste heat, meaning now the building is freezing.

      Designing an HVAC system for the offices that can utilize any available waste heat (or spare cooling capacity) is a GREAT idea, but if the power goes off and the diesels kick on, I wouldn't want to be limiting my runtime (or increasing

  • One critical thing is where the HVAC return ducts and the air vents are. Does the datacenter use raised flooring, or does the place have discrete ducts for its ventilation? I've seen data centers with 12-24" of clearance used as plenum space. Others may have only 2-4 inches because the space is used just for wiring and not for HVAC.

    This needs to be factored in when separating the aisles; otherwise, spending time on meat-locker curtains and endcaps like Google has done may bring few to no returns.

    • by jbengt ( 874751 )
      If you're using the raised floor plenum for supply, the efficient way to do it would be to supply directly into rack enclosures and use the entire room as the warm side. That'd be very efficient, but might take some thought about how you build and ventilate the rack enclosures.
      If the air is supplied directly to the space, make the ceiling a return air plenum, provide ducted exhaust from the racks into the ceiling return, and make the whole space the cold side (as illustrated in TFA). That's almost as effi
  • Datacenter cooling (Score:2, Informative)

    by JWSmythe ( 446288 )

    I've been in quite a few large datacenters. Some have strict rules on properly utilizing their hot and cold aisles. Some couldn't care less.

    The ones with the best ventilation have the cold air coming through the raised floor, and the hot air being pulled from the ceiling. Brilliant. Actually understanding that hot air rises. :)

    Most

    Some I've been in had the hot air being blown from the ceiling, and the return somewhere on a vertical wall.

  • by Mhrmnhrm ( 263196 ) on Saturday May 01, 2010 @06:26PM (#32059490)

    I can honestly say you win either way. The electricity/cost savings of containment will pay for itself regardless of where you put the doors. That said, whether you choose to go HAC or CAC is really choosing between different trade-offs.

    HAC (The APC method): Seemed to be cheaper and easier to install. Since the hot aisle is being contained, if something happens to your coolers, you have a longer ride-through time as there's a much larger volume of cold air to draw from. However, at least when I got out of the business, HAC *required* the use of in-row cooling, and with APC, that meant water in your rows. Europeans don't seem to mind that, but Americans do (which provided an opening for Emerson's XD phase-change systems, dunno if APC has an equivalent or not yet). I personally wouldn't be too keen on having to spend more than a few minutes inside that hot aisle, either.

    CAC (The Emerson method): Seemed to be more expensive, especially in refit scenarios (they appeared to be more focused on winning the big "green-field" jobs than on upgrading old sites), but it can usually leverage existing CRAC units, so you could potentially save enough there to make it competitive, as well as avoid vendor lock-in. The whole room becomes the equivalent of a hot aisle, but convection and the building's HVAC can somewhat mitigate that, so while it'll still be uncomfortable working behind a rack, it doesn't feel quite like the sauna that an HAC system does. Depending on whose CRAC equipment you buy (or already have), EC plug fans and VSD-driven blowers can save even more money if properly configured.

    Other: I've seen the "Tower of Cool" or "chimney" style system, and flat out hate it. They look like a great idea on the face of it: much cheaper, faster installation, able to use building HVAC, etc. But let's be honest. Your servers are designed for front-to-rear airflow. So are the SANs, NASs, TBUs, rack UPSs, and practically everything else you've put in your datacenter, apart from those screwball Cisco routers that have a side-to-side pattern (Seriously... what WERE they thinking on that one???). Why would you then try to establish an upwards-pointed airflow that's got a giant suction hose at the center of the rack's roof, where it can just as easily pull cold air from the front (starving your systems) as it does hot air from the back?

    Personally, I like cold aisle better. If I'm going to be spending two hours sitting behind a server because I can't do something via remote (forced into untangling the network cable rat's nest, perhaps), I like the idea of being merely uncomfortable and a bit sweaty rather than dripping buckets while cursing the bean-counters who forced me to lay off the PFY two months ago. There are also some neat controllers that work with CRAC units to establish just the right amount of airflow to fully feed the row and manage their output, so if running five CRACs at 50% is more power efficient than running three at 100%, that's what they do. I know folks who like hot aisle better. It's more fun for them to show off their prize datacenter since all the areas you'd want to see (unless you're the one responsible for power strips or cable management) are cool.
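    On the "five CRACs at 50% vs. three at 100%" point, the usual justification is the fan affinity laws, which say blower power scales roughly with the cube of speed. A sketch with an assumed 15 kW rated blower per CRAC (keeping in mind that five units at half speed also move only about half the air of five at full speed, so the controller has to balance flow needs against the cube-law savings):

      def blower_power_kw(rated_kw, speed_fraction):
          # Fan affinity law (approximation): power ~ speed^3 at constant air density.
          return rated_kw * speed_fraction ** 3

      rated_kw = 15.0   # assumed rated blower power of one CRAC
      print(5 * blower_power_kw(rated_kw, 0.5))   # five at 50% speed: ~9.4 kW total
      print(3 * blower_power_kw(rated_kw, 1.0))   # three at full speed: 45 kW total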

    • Just a note about those silly Cisco switches:
      Servers have holes in the front and back to facilitate cooling. They can do this because the boards can be oriented to allow it.

      Cisco rack enclosures have high-density blades in the front (no room to breathe) and a sizeable backbone in the back (a wall of PCB).

      Due to the hotplug nature of the blades, the backbone has to be mounted at the back (instead of using riser boards like in computers). The only other way to have it at the side is by making t

  • contain the heat (Score:3, Interesting)

    by Tumbleweed ( 3706 ) on Saturday May 01, 2010 @06:59PM (#32059652)

    When you contain the heat, you then have the ability to move it around and use it for cogeneration, thus vastly increasing your overall efficiency.

  • Both (Score:3, Interesting)

    by funkboy ( 71672 ) on Saturday May 01, 2010 @06:59PM (#32059654) Homepage

    The answer will be specific to each implementation.

    But in general, it should be pretty obvious to anyone that understands basic thermodynamics: get the "cold" into the servers without mixing it with the ambient or letting it touch any hot metal, and get the heat out of the servers without mixing it with the ambient or letting it heat up any other metal.

    It should be pretty obvious that air is not really the best way to do this; air goes all over the place, and is not a very good thermal conductor (relatively speaking).

    There are entire 10k+ machine datacenters in France that use only liquid cooling circuits, right up to the servers. Energy costs for running the external condensers are a small fraction of what it would cost to do the same thing with air. Of course, it helps if you only have your own machines in such an environment, but if APC, Emerson, etc were serious about efficient cooling then they'd partner with HP, Dell, etc. to make standardized systems that would allow this...

    • There are entire 10k+ machine datacenters in France that use only liquid cooling circuits, right up to the servers.

      That's exactly what the server designers should be doing to the rack series servers at this point. Stop with the loud and inefficient air fans and replace them with built-in thermal conduction pipes inside the servers. Every rack should start coming with a master hose and a coupler system that we would connect to the server, prime the server before turning on for the first time, and let the s

  • All the air in a machine room is either hot or cold. Anything else means you're mixing - that is, your containment leaks. There is basically no heat transfer through building conduction, for instance. 'Cold' merely means that it's between the chiller outflow and the front of the servers; hot means the ass-side of the servers and the chiller intake. The primary goal is to keep them from mixing.

    A nontrivial machine room will have multiple chillers and non-uniform heat load distribution. That doesn't change this principle, but

  • There are a lot of reasons why someone will be sitting in a server room for an hour or more. Please don't make it an unbearable hour with heat baking the poor humans.
  • Depends on the climate entirely. Here the summers are brutal and the winters severe. Which one is contained is pretty much up to which one we want to be exposed to (i.e. in winter it is nice to have the heat NOT just dumped outside). Curtains, and some thought to the existing doors/partitions, help with that seasonal flexibility a lot.
  • Use Styrofoam(tm) (Score:3, Insightful)

    by sydbarrett74 ( 74307 ) <sydbarrett74NO@SPAMgmail.com> on Saturday May 01, 2010 @07:59PM (#32059920)
    I mean it worked for the McDonald's McDLT back in the 80's...
  • CYA. Flip a coin. That way, if you get complaints about the wrong choice, you can always blame the coin.
  • The whole point is to separate hot air flows from cold air flows.

    After that, precisely how you go about the process is more a matter of the side effect of other choices.

    This feels, to me, rather like arguing if it's better to put the women's washroom to the right or left of the men's.

  • Then it’s easy: Do the math. Seriously. It’s a question of energy transfer, the simplest thermodynamics, and isolation.
    I’m not an expert in those matters. But I’d say the goal would be to minimize energy (heat) differences between areas right next to each other?

    I’m sure there is an expert (please no wannabes) here who can quickly give a nice answer. :)

  • We have to choose between the glass being half full or half empty. But it's not the symmetrical choice which it might seem to be. Specifically, it's not a matter of providing cold air. That's just a means to an end. Fundamentally, it's a matter of removing thermal pollution.

    The ideal environment for the equipment is one which is uniformly, ambiently cold. Not only are there fewer thermal stresses, the entire design problem is simplified if you can assume uniformity. Departures from this ideal are th
  • Flexible barrier -> contains high pressure -> cold (or it would be ineffective).

    Rigid barrier -> equally applicable to high and low pressure -> hot (more convenient, can be used with varied fan speeds).

  • APC patented their hot-aisle containment, and that is the only reason they use it. While it is silly to think in terms of containing one or the other -- it is just separation -- you ideally want a tall compartment for the hot aisle, for stratification and stack effect. This prevents re-circulation and short-circuiting of hot air, and also makes the hot aisle a more bearable place to stand.
