Hardware

Ask Slashdot: Little Boxes Around the Edge of the Data Center?

First time accepted submitter spaceyhackerlady writes "We're looking at some new development, and a big question mark is the little boxes around the edge of the data center — the NTP servers, the monitoring boxes, the stuff that supports and interfaces with the Big Iron that does the real work. The last time I visited a hosting farm I saw shelves of Mac Minis, but that was five years ago. What do people like now for their little support boxes?"

  • by Hatta ( 162192 ) on Thursday November 01, 2012 @06:46PM (#41847787) Journal

    I make them with ticky tack.

  • VMs (Score:2, Insightful)

    by Anonymous Coward

    put them in VMs!

    • Re: (Score:3, Funny)

      put them in VMs!

      Great Plan! If all your servers are virtual then you don't have to worry about diesel fuel when there's a hurricane!

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        Uhhh... because the "little boxes" and individual servers run on unicorn farts and angel tears?

    • Re:VMs (Score:5, Insightful)

      by Nutria ( 679911 ) on Thursday November 01, 2012 @06:58PM (#41847927)

      Call me old school, but Unix/Linux are multi-tasking. Why not just run multiple services on one OS directly on the metal?

      • Re:VMs (Score:4, Interesting)

        by mlts ( 1038732 ) * on Thursday November 01, 2012 @07:44PM (#41848389)

        There are good reasons to separate functions. Mainly security. That way, if someone hacks the NTP server, they don't get control of DNS, nor do they get control of the corporate NNTP server, or other functions.

        The ideal would be to run those functions as VMs on a host filesystem that uses deduplication. That way, the overhead of multiple operating systems is minimized.

        What would be nice would be an ARM server platform, combined with ZFS for storing the VM disk images, and a well thought out (and hardened) hypervisor. The result would be a server that can take one rack unit, but can handle all the small stuff (DNS caching, NTP, etc.)
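
        For illustration, a minimal sketch of the dedup side of that idea with ZFS -- the pool and dataset names here ("tank", "tank/vm-images") are made up, the device names are examples, and whether dedup pays for its RAM cost is a separate question (see the reply below):

            # mirrored pool for the small-stuff VM images (device names are examples)
            zpool create tank mirror /dev/ada0 /dev/ada1
            # turn on dedup and compression for the dataset holding the guest disk images
            zfs create -o dedup=on -o compression=on tank/vm-images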

        • by Ost99 ( 101831 )

          Why dedup? Those VMs should not require more than 500MB-2GB each.
          Deduplication (inline) only adds complexity and sources of latency you don't need or want.
          Any small pizza box with 2x146GB drives (or 2x256GB ssd) in RAID1 should be able to handle any number of virtualized small utility guests without any deduplication.

      • It comes down to an issue of scalability.
        With multiple services on one OS, if one of the services gets popular and needs more power than the server can handle, you will need to decommission, reinstall, and reconfigure that service onto another server... and in the meantime your other services often take a performance hit. Virtualizing means that if you need to move it from one box to another it is a file copy away, vs. reconfiguring and testing.

      • Re:VMs (Score:4, Interesting)

        by marcosdumay ( 620877 ) <marcosdumay&gmail,com> on Thursday November 01, 2012 @09:39PM (#41849267) Homepage Journal

        Well, one of the reasons is that some services grab hold of port 80 (or, occasionally, other ports) and don't want to share it. With virtualization you can share resources with those too... But yes, those services are a minority, and probably won't need a lot of resources...

        Another reason is that you may want to give different people permission to administrate different machines... But again, except for companies that sell hosting, that's an exception.

        A third reason is that you may want to replicate your environment for backups and testing... Except that you don't need a VM to do that on Linux. You just copy the files, add two devices to /dev and run the bootloader again. It's easier than backing up a VM in Windows.

        And I've never heard of any other reason for virtualization. I can't think of any other either. I'm lost as to why so many people suddenly want it so badly... Ok, all datacenters have added specialized machines for decades because of those first two reasons I gave you above, and they get some benefit from virtualizing them... But the core of a datacenter (the main databases, web servers - the machines that actually spend the day working) should run on the metal, and although I've met several people who argue otherwise, I've never heard any argument for virtualizing them that holds any water.

        But now, I think, maybe the HA people should try to virtualize their clusters. They have a huge amount of redundancy, and consolidating several virtual machines in a single real one can help them reduce their costs. (Ok, if you are in doubt, no, I'm not THAT stupid, it's a joke.)

        • by Nutria ( 679911 )

          I'm lost as to why so many people suddenly want it so badly... Ok, all datacenters have added specialized machines for decades because of those first two reasons I gave you above,

          I thought it was because young geeks and proto-managers grew up with the Curse Of Windows, where you had to run one service per machine, and then brought that flawed mindset into the Linux world.

    • by msauve ( 701917 )
      Good luck putting ntp on a VM (a host, maybe, but interrupt latency will kill you).
  • by steveha ( 103154 ) on Thursday November 01, 2012 @06:59PM (#41847937) Homepage

    I don't work in a data center. But I think you might want to look at an HP Proliant MicroServer.

    Basically it is an AMD laptop chipset on a tiny motherboard in a cunningly designed compact enclosure. The SATA drives go into carriers that are easily swapped (but not hot-swappable). It's quiet and power-efficient. It supports ECC memory (max 8GB) and supports virtualization.

    http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/15351-15351-4237916-4237918-4237917-4248009-5153252-5153253.html?dnr=1 [hp.com]

    Silent PC Review did a complete review of an older model (with a 1.3 GHz Turion instead of 1.5 GHz).

    http://www.silentpcreview.com/HP_Proliant_MicroServer [silentpcreview.com]

    SRP is $350, but Newegg has it for $320 (limit 5 per customer).

    http://www.newegg.com/Product/Product.aspx?Item=N82E16859107052 [newegg.com]

    Newegg also has 8GB of ECC RAM for about $55, so you can get one of these and max its RAM for under $400.

    I just got one and haven't had time to really wring it out, but I did do the RAM upgrade. Despite the tiny enclosure, it wasn't too painful to work on it, and I was impressed by the design. The Turion dual-core processor has a passive heat sink on it, and the single large fan on the back pulls air through to cool everything. (There is also a tiny high-speed fan on the power supply.)

    I'm going to use this as my personal mail server. It's cheap enough and small enough that I plan to have at least one put away as a hot spare; if the server dies, I'll power it down, move the hard drives to the spare, and I'll have the mail server back up within 5 minutes. Not bad for a cheap little box.

    • Re: (Score:3, Interesting)

      It's not rack-mountable. No IPMI either. That should be a deal-breaker for anyplace serious enough to have a rack.

      We try to virtualize anything that can be virtualized. But for those few tasks that really need to run on bare metal, we've had good luck with little Atom D525 Supermicro rackmountable boxes. We bought a few complete boxes (minus RAM and storage) that Newegg billed as fanless (which was a lie). Those ran hot enough to develop problems after a few months. Ever since we've built ours up from part

      • And in some places that get a little *too* serious, you end up with some stupid proprietary appliance that can't be rack mounted but the PHB swore was needed. And for that, you will have one of these [rackmountsolutions.net]. And in the extra space next to said proprietary POS, you can put something like the abovementioned HP server.

  • ESXi (Score:3, Interesting)

    by nurb432 ( 527695 ) on Thursday November 01, 2012 @07:01PM (#41847959) Homepage Journal

    No little unsupportable boxes here.

  • by trandles ( 135223 ) on Thursday November 01, 2012 @07:02PM (#41847971) Journal

    Last generation's compute nodes. We keep some around for utility functions after decommissioning a large cluster.

  • by attemptedgoalie ( 634133 ) on Thursday November 01, 2012 @07:02PM (#41847973)

    Go get a GPS satellite receiver/time server. Actually, get two. Don't screw with time.

    THEN, virtualize the rest of the stuff. Monitoring, syslogging, management, patchers, etc.

    We've virtualized everything except for
    - a Windows DC so that it stays up if the vmware datastores or SAN eats itself in a horrible way.
    - The NIS server we have to use on our UX environment due to an ancient regulation. I'm not willing to put up HP-UX VMs for this right now, otherwise it'd be safe in a VM as well.
    - Anything we can't virtualize due to licensing/contract/support issues. So our VOIP environments, phone call recording, access control systems for the doors,

    My datacenter is getting a lot nicer to look at, and a lot easier to upgrade. I can shift servers or volumes all over the room so I can do live maintenance during the day.

    • by Anonymous Coward

      Note: GPS timeservers can vary widely in quality. Don't assume that the most elegant package, slickest website or cheapest price equates to a solid box (remember, realtime OS's can crash too ;).

      Some of the most reliable and precise timeservers I've seen have been home-built PC based boxes.. YMMV.
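
      As a rough sketch of what such a home-built box looks like in software -- assuming gpsd is feeding the GPS fix into a shared-memory segment and a PPS signal is wired to /dev/pps0 (device names, offsets, and the 10.0.0.0/8 network below are illustrative) -- the chrony side is only a few lines:

        # /etc/chrony.conf on a home-built GPS/PPS time server (illustrative values)
        refclock SHM 0 offset 0.2 delay 0.1 refid NMEA noselect   # coarse time-of-day from gpsd
        refclock PPS /dev/pps0 lock NMEA refid GPS                 # pulse-per-second for precision
        allow 10.0.0.0/8                                           # serve time to the rest of the racks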

  • "Obsolete" hardware (Score:5, Interesting)

    by beegle ( 9689 ) on Thursday November 01, 2012 @07:03PM (#41847993) Homepage

    Those support tasks don't exactly push hardware to its limit, and most of those tasks are the kind of thing that demands a bunch of redundant servers anyway.

    Throw a bunch of "last generation" hardware at the task -- stuff from the "asset reclamation" pile. Leave a few more around as spares. Less disposal paperwork. Works just fine. By the time your last spare fails, you'll have a new generation of obsolete hardware.

  • amazon (Score:2, Interesting)

    by mveloso ( 325617 )

    For little boxes that deal with DNS, time, etc - put them in amazon. They're critical servers, but don't really need to be at your site. Put the primaries outside, and slaves on the inside. That way if you have an outage you can always repoint DNS to somewhere else...something you can't do if your primary DNS is on a dead network.
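
    As a hedged sketch of that "primary outside, slave inside" split in BIND terms -- the zone name and the 203.0.113.10 primary address are placeholders:

      // named.conf fragment on the on-site slave
      zone "example.com" {
          type slave;
          masters { 203.0.113.10; };        // the primary hosted off-site
          file "slaves/db.example.com";
      };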

    • You want consistently fast behaviour from your time servers. Don't mess with virtualizing them.

      • by AK Marc ( 707885 )
        If an NTP request is served a little slow, what's the problem?
          • VMs do bad things to keeping accurate time; they do a lot of funny business. The solution so far is to poke a hole through to the main OS to get time.
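
            That "hole through to the main OS" usually means the hypervisor's paravirtual clock or its guest-tools time sync; a quick way to check or enable it (KVM and VMware examples, with paths and commands as commonly shipped, so treat them as illustrative):

              # KVM guest: confirm the paravirtual kvm-clock source is in use
              cat /sys/devices/system/clocksource/clocksource0/current_clocksource   # expect "kvm-clock"
              # VMware guest with the tools installed: sync the guest clock from the host
              vmware-toolbox-cmd timesync enable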

    • Virtualize NTP?

      Good luck with that...
    • For little boxes that deal with DNS, time, etc - put them in amazon.

      besides the NTP problems, also make sure to write down on a piece of paper the IP of every computer in IT, then put it on a wall.

      When you have internet problems and nobody is able to get any work done anymore because all of those light services that "don't really need to be at your site" are unreachable, you'll need those addresses for the LAN party.

  • You have a crash cart with a KVM (for the rare occasions you need to locally access two or more machines simultaneously) that attaches to all the specialized cables for interfacing with your blades or full-size servers; make sure it has a shelf for holding drives/RAM/batteries and a bin for the more specialized PS2/USB-to-server converters. Otherwise you sit at your desk and remote into EVERYTHING: VMs, Linux, Windows, iLO/etc. - HEX
  • VMs

  • by MichaelSmith ( 789609 ) on Thursday November 01, 2012 @07:16PM (#41848149) Homepage Journal

    I think it's appalling that we do that. It's a horribly expensive way to work in hardware, but we do it because we can't be stuffed to deal with operating systems. Most likely a single box and OS instance could do it for you if it was set up correctly.

  • If you (Score:5, Funny)

    by JustOK ( 667959 ) on Thursday November 01, 2012 @07:19PM (#41848173) Journal

    If you can't run it on your iPad, it's probably not worth running.

    --Management.

    • I'm picturing racks of overclocked iPads with a wall of box fans pointed at them.

      And then I'm imagining the conversations that would inevitably ensue:
      "I know I fat fingered the fucking IPV6 address. YOU try typing on this goddamn touch screen"

  • I personally hate and despise people who put non-rackmount kit in racks...

    We use various devices... mostly all 1RU servers of various configs... e.g. there are a couple of mini-ITX 1RU servers we have with E-350 based mini-ITX boards (I really love the E-350/E-450 boards)... not quite as cheap as the HP N40 microserver, but at least it's a rack format.

    Then we have a few that run virtualisation here and there for some tasks using kvm (some of those too have e350's in them as the e350's do have the virt'n e

    • by Skapare ( 16644 )

      I totally agree. Just populate your racks and pick some for "special duty" (and put your DNS, NTP, and monitoring daemons on there).

  • These little boxes are very common around data centers.
  • by sxltrex ( 198448 ) on Thursday November 01, 2012 @07:24PM (#41848225)

    I can't imagine trying to perform network management with a few mac minis so I'm assuming you're referring to a very small facility? Our new data center was built on 10-gig infrastructure and our NM is appropriately scaled--NetScout Infinistreams connected to Gigamon matrix switches. While the Gigamons were quite expensive they allowed us to utilize fewer Infinistreams while also providing some very cool functionality.

    It took a long time for our upper management (those with the dollars) to come around to the notion that, in order to realize the full investment made in the data center, true network management needed to be baked in from the start.

  • We are using a couple of Soekris [soekris.com] boxes for some basic monitoring. They are lightweight Atom processors with no active cooling, and they're designed with networking in mind: 4 Gig-E ports on the 6501, and you can get up to 8 more thanks to the 2 PCI-E slots available in the rackmount version. Since we are using an mSATA SSD on the board we have no moving parts, so nothing mechanical to fail.

  • I like the same big boxes as are used for everything else. NTP server, running on a Mac Mini...really? Get a GPS-driven device that serves the purpose. They run an embedded OS, so they're very low-maintenance and straightforward, and they perform extremely well. As far as uptime/network/performance monitoring functions, these need to be at least as reliable as everything else. And the mainframe interfaces are awfully important...imagine how much good you'd be if you maintained your intellect but became

  • http://www.synertrontech.com/ [synertrontech.com]

    Some are fanless, which I use for Linux boxes, some are rackmount with multiple motherboards per 1U case, and their prices and add-ons are cheaper than Newegg.

    Nope, don't work for 'em, just used their products for about 8 years now.

  • We don't have any management or service boxes. Everything is appliances (cisco/HP) or off site (exchange, CRM). Our AD servers act as the time servers for the hosting environment. We don't want to manage anything else as it all takes away from the bottom line and eats fairly expensive rack space.
  • If by "big iron", you mean "IBM Mainframes or similar kit", then your question has meaning.

    If by "big iron", you mean "lots of irritating PCs that I think I can add up into a supercomputer because all problems are amenable to parallel solutions", then your question is meaningless.

    Assuming the second, you are much better off just using identical hardware for everything, since it will mean you have the components on hand should anything go wrong, and it will mean that you have a single maintenance SKU. In th

  • by funkboy ( 71672 ) on Thursday November 01, 2012 @08:43PM (#41848889) Homepage

    ...I don't want it in my datacenter. If you have no budget for non-revenue-generating boxes for services like DNS, NTP, etc. then upgrade the server hardware you tore out of production after the last upgrade cycle with SSDs and low-wattage processors & put it back into service for your internal needs.

    Otherwise get a few Dell R210s or some other small cheap rack server with an IPMI 2.0 BMC and get on with your business. Any money saved by buying "mini-PCs" (or whatever you want to call them) for any datacenter computing hardware you plan to rely upon at all will be burned the first time you have to drive to the datacenter and physically babysit some cheap machine because it didn't have IPMI.
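
    To make the "no driving to the datacenter" point concrete, this is the sort of thing an IPMI 2.0 BMC buys you with stock ipmitool -- the BMC hostname and credentials below are placeholders:

      # power-cycle a wedged utility box and watch its serial console from your desk
      ipmitool -I lanplus -H r210-bmc.example.net -U admin -P 'secret' power status
      ipmitool -I lanplus -H r210-bmc.example.net -U admin -P 'secret' power cycle
      ipmitool -I lanplus -H r210-bmc.example.net -U admin -P 'secret' sol activate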

  • "And they all look just the same"

  • The use of discrete machines allows a machine to be specialized for a task. Sometimes you just need fast number-crunchers for special types of numerical problems, and GPUs work well. Other times, tasks can be parallelized so a distributed computing model works well. For the accessory infrastructure of NTP, DNS, and so forth, reliability is more important than CPU MIPS or memory bandwidth. Take the high-end servers of yesteryear, ones that would have been put out to pasture, and use those for such things. De

  • Use 2 or 3 redundant low-power enterprise-class servers. Set up VMware or similar with automatic failover, and make all those "little boxes" into virtual machines. The benefit is that you can easily rehost the services even onto your production VM solution in a real emergency. Having them as separate VMs gives the same benefit as having them as separate little boxes (i.e. restarting the NTP server only affects NTP services, not your email as well). You have the added benefit of being able to easily deploy up
  • One-off boxes become a huge time sink, usually at the absolute worst possible time to do so. With two very viable options in Xen and ESX, put the time and care into setting up a stack with the nifty features you want -- redundancy options, the ability to move VMs from one server to another, monitoring, out-of-band management, RAID, etc.

    Then you can set up the little management hosts, set up a VM for each one of those "little things", and also come up with a single way of deploying your operating systems so y
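
    On a KVM/libvirt stack, that "single way of deploying" can be as simple as cloning one well-maintained template guest (Xen and ESX have their own equivalents; the template and guest names here are hypothetical):

      # every "little thing" becomes a clone of the same shut-down template domain
      virt-clone --original utility-template --name util-dns01 --auto-clone
      virsh start util-dns01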

  • Why not hypervisors? (Score:4, Interesting)

    by SignOfZeta ( 907092 ) on Thursday November 01, 2012 @10:28PM (#41849563) Homepage
    I don't operate a datacenter, but for virtualized servers in an office, I always enable the NTP server functionality in the hypervisor, have it sync to a stratum-1 time source, then advertise that address via DHCP and DHCPv6 for my guests and workstations (and visiting cell phones) to use. Being the definitive time source, I also tell the hypervisor to automatically set the clock on the guests, then give a virtualized AD domain controller (if any) the PDC FSMO role to set the Windows domain time. I have sites with two or three hypervisors running NTP, and it seems to work well. Not sure if it will scale to your environment, OP, but it may be worth mulling over.
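
    For the DHCP side of that, the IPv4 half is a one-liner in ISC dhcpd -- the subnet and server addresses below are examples, and the newer DHCPv6 NTP option is configured separately:

      # dhcpd.conf: hand every client the hypervisors' NTP service
      subnet 192.168.10.0 netmask 255.255.255.0 {
          range 192.168.10.100 192.168.10.200;
          option ntp-servers 192.168.10.2, 192.168.10.3;
      }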
