Software / Operating Systems

Why Do Computers Still Crash?

geoff lane asks: "I've used computers for about 30 years, and over that time their hardware reliability has improved (though not by much) while their software reliability has remained largely unchanged. Sometimes a company gets it right -- my Psion 3a has never crashed despite being switched on and in use for over five years, but my shiny new Zaurus crashed within a month of purchase (a hard reset, losing all data, was required to get it running again). Of course, there's no need to mention Microsoft's inability to create a stable system. So, why are modern operating systems still unable to deal with and recover from problems? Is the need for speed preventing the use of reliable software design techniques? Or is modern software just so complex that there is always another unexpected interaction that's not understood and not planned for? Are we using the wrong tools (such as C) which do not provide the facilities necessary to write safe software?" If we were to make computer crashes a thing of the past, what would we have to do, both in our software and in our operating systems, to make this come to pass?
  • Simple ... (Score:4, Insightful)

    by Vilim ( 615798 ) <ryanNO@SPAMjabberwock.ca> on Tuesday May 20, 2003 @08:57PM (#6003281) Homepage
    Well, basically, as software systems get more complex there are more things to go wrong. That is why I like the roll-your-own kernel approach of Linux: don't compile the stuff you don't need, and fewer things can break.
    • Re:Simple ... (Score:5, Insightful)

      by Transient0 ( 175617 ) on Tuesday May 20, 2003 @09:01PM (#6003316) Homepage
      More specifically... As hardware gets more complex, software gets more complex to fill the available space. More complex software not only means more things to go wrong but also means that the hardware never really gets a chance to outpace the needs of the software.

      Also, as I'm sure someone else will point out, it is very hard to write code that will not crash under any circumstances. Even if you are running a super-stripped-down Linux kernel in console mode on an Itanium, you can still get out-of-memory errors if someone behaves rudely with malloc().
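
      A minimal sketch of that failure mode in C (the allocation size below is deliberately absurd so the request fails; none of this code is from the original post):

          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              /* Deliberately huge request so malloc() fails even on a large machine. */
              size_t huge = (size_t)-1 / 2;
              char *p = malloc(huge);
              if (p == NULL) {                 /* the check that "rude" code skips */
                  fprintf(stderr, "out of memory: refusing politely instead of crashing\n");
                  return 1;
              }
              p[0] = 'x';                      /* without the check above, this line would segfault */
              free(p);
              return 0;
          }
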
    • by cscx ( 541332 ) on Tuesday May 20, 2003 @09:01PM (#6003319) Homepage
      Actually the Zaurus he mentions crashing in the article runs a roll-your-own Linux kernel... [linuxdevices.com] ;)
    • Re:Simple ... (Score:5, Interesting)

      by The Analog Kid ( 565327 ) on Tuesday May 20, 2003 @09:07PM (#6003377)
      Yes, on my parents' computer, which has Windows 2000 on it (tried Linux; it didn't work for them). I set most of the services that aren't needed to manual, disabled Auto-update, and put it behind a router, of course. The only problem that remained was Internet Exploder, so I just installed Mozilla with an IE theme (they haven't noticed a difference). I think killing most of the services keeps it up. Haven't had a problem with it. This was done before KDE 3.1.x, so who knows, Linux might work after all.
    • by jabber01 ( 225154 ) on Tuesday May 20, 2003 @09:32PM (#6003614)
      Software crashes because it's complex, yes, but that's just part of it.

      Jets are complex too. So is the Space Shuttle. Cruise ships. CARS are pretty complex.

      While all these things do suffer catastrophic failure from time to time, it is far from the norm. Defective cars get recalled. Space shuttles ALL get grounded at the mere possibility of defect.

      If QA as stringent as this were applied to software, Microsoft - and in fact most of the software industry - would be out of business. Can you imagine a Windows recall?

      There is software out there that does not fail. Mind-bendingly complex software of the sort that "drives mere mortals mad" to boot. It is tested and retested, through all possible situations - not just the "likely 80%" of them. It is proved correct, and then verified again.

      COTS software is crap because neither the market nor the regulatory forces (such as they are, but that's a separate discussion) require it to be anything better. Nor could they.

      A 747 Jumbo costs a whole lot, and while much of that cost is in the manufacture of the "big and complex thing" that it is, a significant chunk of that cost is also due to the design process, the testing, the modeling and simulation of it.

      Software is easy to scale; everyone can have a copy of the product once one is built. Cake. But add in the cost of an error-free design - tested to exhaustion, passed through V&V and so on - and you have a completely different market landscape with which to contend.

      Consumers, in the COTS context, don't mind "planned obsolescence" in their software. The current state of things proves this. People would rather have pretty features on a flaky system, than a solid system.
      • by Surazal ( 729 ) on Tuesday May 20, 2003 @09:44PM (#6003690) Homepage Journal
        Consumers, in the COTS context, don't mind "planned obsolescence" in their software. The current state of things proves this. People would rather have pretty features on a flaky system, than a solid system.

        This is not necessarily true... it's a bad generalization besides. Most people I work with in the IT industry would give their arm, leg, spleen, right lung, part of their left lung, lower intestine, and maybe even their occipital lobes for a reliable system that WORKS. Features are secondary.

        The "features over stability" myth is just that: a myth. Show me an admin that prefers only the latest and greatest in "features" and I'll show you an admin that will lose all her/his hair within six months (a little after all their hair turns white).

        Well, ok, I work primarily with IT people admittedly. Perhaps the folks in management are a little different. But I've noticed that IT people have ways of making management's lives miserable (in ways that are downright creative) when a bad decision is made with software purchases. I've done it, myself. ;^)
      • by Chris Carollo ( 251937 ) on Tuesday May 20, 2003 @10:25PM (#6004001)
        Jets are complex too. So is the Space Shuttle. Cruise ships. CARS are pretty complex.
        Then again, if one of the overhead bin latches gets stuck, or my overhead light burns out, or my seatbelt gets stuck, the entire plane or car doesn't instantly explode. The issue isn't complexity, it's fragility.

        Software is incomprehensibly fragile -- any single thing can cause a crash, taking the whole system or application down. And even those critical parts of things like airplanes have multiple redundancies, something that's hard to build into software. You can do things like catching exceptions, but you typically can't recover as gracefully as if there was never a problem at all.

        The shuttle is actually not a bad analogy -- it's also very fragile due to the stresses it endures. And we've effectively had two crashes in 100 runs. Most software is more stable than that.
      • Can you imagine a Windows recall?

        I must be able to, I'm feeling flushed and my nipples are hard.

  • Easy (Score:4, Funny)

    by PerlGuru ( 115222 ) * <michael@thegrebs.com> on Tuesday May 20, 2003 @08:57PM (#6003283) Homepage
    Same reason cars crash.... people ;-)
  • by fishbowl ( 7759 ) on Tuesday May 20, 2003 @08:59PM (#6003296)
    Crash? What crash?

    radagast% uptime
    8:56pm up 582 day(s), 12:45, 22 users, load average: 0.00, 0.00, 0.01

    • by Anonymous Coward on Tuesday May 20, 2003 @09:04PM (#6003358)
      load average: 0.00, 0.00, 0.01

      easy to keep a computer up if you never use it ;)
      • by toddestan ( 632714 ) on Tuesday May 20, 2003 @09:49PM (#6003721)
        Even my uptime experiments, which consisted of taking old but reliable hardware, installing Windows 95/OSR2/98/98SE/ME, and then letting the computer sit idle doing nothing, never resulted in more than about 25 days before I came over and found Windows fubar'ed or the computer simply locked hard.

        Windows 3.1 actually did quite well if I remember right, as it seemed perfectly content sitting idle doing nothing seemingly forever. Windows 9x always seemed to randomly thrash the HDD, even after a clean install, which led me to believe that Windows 9x is never truly idle; it's always up to something (virtual memory?), and that something eventually will bring it down.

        Windows 9x actually has a bug that would lock the computer after about 49.7 days of uptime, but it took years to catch because no one ever got close to that mark.
    • Crash? What crash?

      up 582 days


      Reboot? What reboot?

      Now, when was the last time you tested those init scripts? :)

      -= Stefan
      • by Dr. Photo ( 640363 ) on Tuesday May 20, 2003 @09:57PM (#6003774) Journal
        Reboot? What reboot?

        Now, when was the last time you tested those init scripts? :)


        Init scripts? You heathen!!

        Rebooting is a special occasion, signalling the coming of the harvest season, or the installation of a new kernel. Accordingly, the High Priest shall bring the system up by hand, typing in the ancient incantations from the sacred scrolls.

        Init scripts are for the weak of faith. Let ye not be tempted by the daemons of rc-dot-d!
        • by Guppy06 ( 410832 ) on Tuesday May 20, 2003 @10:58PM (#6004199)
          "Accordingly, the High Priest shall bring the system up by hand, typing in the ancient incantations from the sacred scrolls."

          Would those sacred scrolls, perchance, be small, yellow, and stuck all around the monitor screen?
    • So what kernel is that you're running? Hmmm. If it's a Linux box, that would barely be 2.4. More likely 2.2.

      (Digging through my pile of vulnerabilities...)

      Say, could we get an address on that box? Muhuahahahaha

      My uptime is largely limited by kernel upgrades and the fact I cycle the power once per month to prevent the drive head from sticking.

    • by deathcow ( 455995 ) on Tuesday May 20, 2003 @10:55PM (#6004189)
      We had a Cisco router wigging out the other week. Our Network Admin decided to reset it, and it offered this up:

      Kodiak_Rtr uptime is 6 years, 9 weeks, 3 days, 10 hours, 43 minutes

      System restarted by power-on

  • by drink85cent ( 558029 ) on Tuesday May 20, 2003 @08:59PM (#6003301)
    As I've always heard with computers, you can't prove something works; you can only prove it doesn't work. As long as there is an almost astronomical number of states a computer can be in, you can never test every possible case.
    • by innosent ( 618233 ) <jmdority.gmail@com> on Tuesday May 20, 2003 @09:54PM (#6003752)
      Not exactly. Assuming that the hardware is OK, you can prove that a system is reliable for any given finite input (including, most importantly, all possible finite substrings of inputs; it is not possible to test all possible inputs, since a portion of those are infinite). It's just that doing so in large systems takes enormous amounts of time, and of course, time = money. Take Microsoft, for example. It takes a team years to develop a product like Windows XP, run a few test cases, and fix the major bugs. But just think how long it would take to go through every possible input substring of a given length (and by substring/string I am including non-character inputs [mouse, network, etc.]).

      Consider a simple program that inputs 10 short strings of text and does some computations on those strings. Say, for example, that the system has only a keyboard as input, that all input functions are guaranteed only to input A-Z (caps only), the space bar, and 0-9 (regex ([A-Z]|[0-9]| )*), not to overflow, and that there are 10 inputs with exactly 10 characters for each input (spaces fill the end of the string). This means that there are 37 possibilities for each character, totaling 37^100 unique possible inputs, about 6.61E156 possibilities, each 100 characters long. Typing a million characters per second would take 2.094E145 years! Keep in mind that this is an extremely simple system.
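
      The arithmetic above can be reproduced directly; a throwaway C calculation (written for this comment, not from the original post) confirms the figures:

          #include <math.h>
          #include <stdio.h>

          int main(void)
          {
              double inputs  = pow(37.0, 100.0);              /* ~6.61e156 distinct inputs     */
              double chars   = inputs * 100.0;                /* each input is 100 characters  */
              double seconds = chars / 1e6;                   /* typing a million chars/second */
              double years   = seconds / (365.25 * 24 * 3600);
              printf("inputs: %.3e   years to type them all: %.3e\n", inputs, years);
              return 0;
          }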

      Therefore, it is not possible to test ALL input cases of any nontrivial program, only a few selected cases, which most will agree is far from proving a program correct. Instead, developers should have detailed mathematical descriptions of how a program is to behave at each incremental step, and verify that the program follows those descriptions accurately. Programs can only be proven correct in the same manner that any discrete mathematical concept can be proven correct, with one of the most common methods of a functionality proof being mathematical induction. Based on a few basic assumptions (like that the functions you call work as documented), the rest of the system can be proven by proving the trivial parts and cases first, and then constructing a complete proof based on the trivial parts.

      The problem with this is that a small change can have a big impact on the proof, and nobody actually takes the time to verify that everything still works. Companies don't often spend money on making their software 100% correct, they just need to add the nifty new features that their customers want before their competitors do. I'd be willing to bet that 90% of the bugs found in XP can be traced to a "nifty new feature" that broke code that may have been proven correct at some point.

      In other words, the short answer is yes, if you can test every state, you can prove a program correct, but since that's usually impossible, it becomes the developers' responsibility to incrementally prove the system, which is far easier if all functionality is planned ahead of time, but still too time/money consuming for most software companies to bother with. Microsoft doesn't care if your computer crashes, you'll probably still pay them, and as much as I'd like to think otherwise, OSS isn't much different (although it's usually more time than money there).
    • Debian. (Score:5, Funny)

      by twitter ( 104583 ) on Tuesday May 20, 2003 @10:14PM (#6003909) Homepage Journal
      Debian [debian.org] tested in every state [debian.org], works good everywhere. I have yet to prove that it does not work anywhere in any way. I can not say the same thing for any other software I've ever run on a PC.
  • Human Error (Score:5, Insightful)

    by Obscenity ( 661594 ) on Tuesday May 20, 2003 @09:00PM (#6003302) Homepage
    All programs (for the most part) must be written by people. People crash, they're buggy and they don't have a development team working on them. Computers crash because people can't catch that one little fatal error in 10,000 lines of code. Smaller programs are less susceptible to errors and to the big scary warning messages that make even the most world-hardened geek worried about his files. Yes, it's getting better with more and more people working on something at once. Mozilla (www.mozilla.org) has a feedback option to help them debug, many software companies are including this. But even with that in place, there is always that small human error that will screw something up.
    • Re:Human Error (Score:5, Insightful)

      by Malcontent ( 40834 ) on Tuesday May 20, 2003 @09:45PM (#6003700)
      "People crash, they're buggy and they dont have a development team working on them. Computers crash because people cant catch that one little fatal error in 10,000 lines of code. "

      While this statement is true, it's also a cop-out. In the last twenty years there have been tremendous advances in computer science and languages, and yet everybody still programs in C.

      That is the reason why programs crash. Why don't people use languages that make programs more failsafe and make programmers more productive?

      It would be interesting to do a study of the "bugginess" of programs written in Python, Java, Scheme, Smalltalk, Lisp, etc. My guess is that programs written in C crash the most.

      Where are all the programs written in scheme or smalltalk or ML?

      Use better languages and crash less.
      • Re:Human Error (Score:5, Insightful)

        by Uller-RM ( 65231 ) on Tuesday May 20, 2003 @10:15PM (#6003918) Homepage
        Java programs can still crash -- and believe me, grade homework for undergrad CS students for a few years and you'll see plenty of it. The only difference is that Java tosses an exception that isn't handled, and C either asserts and calls exit(-1) or segfaults.

        I don't think it's fair to say that any one language is "safer" than another -- once you reach a certain level of expertise, you can write a stable and robust program in C or C++ or Java or Haskell (my preference) with equal effort. The effort is mental: being persistent enough to define solid logical definitions for each part of the program, failure conditions, etc., and then execute them to the letter in the language of choice. If the program behaves logically, you can prove that it works using logical principles -- induction and so on. (And if you ever do govt contracting or any other project that calls for requirement traceability, you'll need to.)

        The difference between languages is merely the way the code is expressed. Java and C++ have exceptions; C does not. For some situations, return codes are better than exceptions, and for some situations the opposite is true. Java has robust runtime safety -- C and C++ do not. C and C++ have templated containers -- Java's just now getting such genericity. All languages and all approaches to problems have tradeoffs: the mark of a good programmer is knowing those tradeoffs and picking which is best for the situation.
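
        For what it's worth, the return-code style mentioned above looks something like this in C (a sketch only; the file name and error codes are invented for illustration):

            #include <stdio.h>

            enum { CFG_OK = 0, CFG_ERR_OPEN = 1, CFG_ERR_READ = 2 };

            /* No exceptions in C: every failure is reported through the return
               value, and the caller decides how (or whether) to recover. */
            static int read_first_line(const char *path, char *buf, size_t len)
            {
                FILE *f = fopen(path, "r");
                if (f == NULL)
                    return CFG_ERR_OPEN;
                if (fgets(buf, (int)len, f) == NULL) {
                    fclose(f);
                    return CFG_ERR_READ;
                }
                fclose(f);
                return CFG_OK;
            }

            int main(void)
            {
                char line[128];
                int rc = read_first_line("example.conf", line, sizeof line);
                if (rc != CFG_OK) {
                    fprintf(stderr, "config unreadable (code %d), using defaults\n", rc);
                    return 0;          /* degrade gracefully instead of dying */
                }
                printf("first line: %s", line);
                return 0;
            }

        An exception-based language collapses most of those if blocks into one try/catch, which is exactly the tradeoff being described: sometimes the explicit checks are clearer, sometimes they are just noise that people forget to write.
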
        • Re:Human Error (Score:5, Insightful)

          by ojQj ( 657924 ) on Wednesday May 21, 2003 @03:35AM (#6005420)
          Disclaimer: I haven't programmed in Java since my undergrad, but I learned it before C++. I've been programming in C++ professionally for 3 years straight now, not counting internships and class assignments before that.

          I'd rather have an exception than a crash. It gives me more information about what I did wrong. A crash that's not reliably repeatable and only happens in your release version under Windows OT systems with IE 4 installed is next to impossible to find and fix -- in C++ it's only worse.

          Not only that, but memory management is more than just a nuisance. Just yesterday, I wanted to move some code from one class to another to improve the object-oriented structure of some code which I've taken over from another developer. In that code were a couple of news, and I couldn't find the deletes which matched them. So I asked the original developer. Turns out the deletes were in a base class of the class that I was moving the code to. If I had been programming in Java, this would have been a cut and paste job finished in 30 seconds, plus 15 minutes for testing the change before checking in. In C++, it was 15 minutes trying to find the deletes myself, 15 minutes waiting for the other developer to get to a break point in his work and another 15 minutes assuring myself that the deletes really were called for all cases, and another 15 minutes for testing the change before checking in. That's a factor of 3-4 (depending on if I have something else I can do while waiting) for the C++ program.
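
          The situation described is roughly this, sketched in plain C for brevity (the function and names are invented; this is not the poster's actual code):

              #include <stdio.h>
              #include <stdlib.h>
              #include <string.h>

              /* Nothing in the signature says who frees the returned buffer; that
                 knowledge lives in documentation or in another developer's head. */
              static char *make_greeting(const char *name)
              {
                  size_t n = strlen(name) + sizeof "hello, ";
                  char *buf = malloc(n);
                  if (buf == NULL)
                      return NULL;
                  snprintf(buf, n, "hello, %s", name);
                  return buf;                /* ownership silently passes to the caller */
              }

              int main(void)
              {
                  char *msg = make_greeting("world");
                  if (msg != NULL) {
                      puts(msg);
                      free(msg);             /* forget this and you leak; free twice and you crash */
                  }
                  return 0;
              }

          Moving make_greeting() to another module means re-answering the ownership question by hand, which is the archaeology described above; a garbage-collected language makes the question disappear.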

          Memory management and other unnecessary tasks which C++ saddles the developer with do make an impact on either development time, program stability, or both. And that is also true for experienced C++ programmers.

          They also make an impact on language learning time, which is not to be underestimated with the number of newbies today, and people moving up from still worse languages like Cobol. In addition, even for an experienced C++ programmer, they make a difference in the time it takes to understand code which was programmed by another programmer.

          I agree with you that there are situations where every language, including C++, is the most appropriate for the problem in question. I just think that C++ is over-used, thus reducing the average stability of modern programs and the average productivity of modern programmers.

      • Re:Human Error (Score:5, Insightful)

        by GlassHeart ( 579618 ) on Wednesday May 21, 2003 @12:16AM (#6004629) Journal
        It would be interesting to do a study of the "bugginess" of programs written in Python, Java, Scheme, Smalltalk, Lisp, etc. My guess is that programs written in C crash the most.

        Even that is a worthless statistic. Assuming that bad programs are written by bad programmers, the language that more bad programmers choose will appear the highest in your study as the buggiest language. Bad programmers choose the language du jour, thinking it will land them a cushy job.

        You'll have to disprove the assumption to correctly blame the language.

        Use better languages and crash less.

        Try dividing by zero in your better language of choice.
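
        In C, integer division by zero is undefined behaviour (on most desktop platforms the process dies with SIGFPE); in Java it throws ArithmeticException instead. Either way, a human still has to write the guard, roughly like this (a sketch, not from the original post):

            #include <stdio.h>

            /* Returns 1 and writes the quotient through out on success;
               returns 0 if the divisor is zero. */
            static int safe_div(int num, int den, int *out)
            {
                if (den == 0)
                    return 0;            /* refuse rather than invoke undefined behaviour */
                *out = num / den;
                return 1;
            }

            int main(void)
            {
                int q;
                if (safe_div(10, 0, &q))
                    printf("10 / 0 = %d\n", q);
                else
                    puts("division by zero caught; no crash");
                return 0;
            }
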

    • Re:Human Error (Score:5, Interesting)

      by JohnsonWax ( 195390 ) on Wednesday May 21, 2003 @12:00AM (#6004548)
      "All programs (for the most part) must be written by people. ... Computers crash because people cant catch that one little fatal error in 10,000 lines of code."

      All bridges (for the most part) must be built by people. Bridges collapse because people can't catch that one little fatal error in one or two million components.

      The shit coders put out there, I swear... The reason software crashes is that by-and-large it's hacked together, not engineered. You hack a bridge together, and yes, it'll fail. You engineer software, and yes, it will run reliably. It's not fun to do - no easter eggs, no cool tricks, no cramming features in weeks before ship.

      I'm stunned at the amount of code that goes out that was written by interns, by inexperienced coders, by people that just don't have a clue. The software industry really has no concept of best practices, no leadership, no authority body. The fact that buffer overflows still happen is stunning.

      Instead, small projects work well because out of dumb luck they happen not to fail, and larger projects work okay because we have 34,000 people looking at the code. If that's 'best practices', then we're doomed.

      "Mozilla (www.mozilla.org) has a feedback option to help them debug, many software companies are including this."

      Uh huh. Let's translate that to my car: "Hi. Yeah, I'd like to report a bug. I have a Saturn Ion, version 1.1v4. Yeah, when I turn on the left turn signal and then turn on the lights, the car catches on fire. You might want to fix that in the next version. Just thought you might want to know. Bye."
  • It's bugs! (Score:5, Insightful)

    by madprof ( 4723 ) on Tuesday May 20, 2003 @09:01PM (#6003317)
    Applications are getting bigger. Code is growing in size. As computing power grows so does the complexity of the code that is written. This means there is a greater chance of bugs.
    You can write in any language you like, but bugs will still get through. Lack of proper planning, non-anticipation of working conditions etc. all combine.
    If you can make all programmers perfect then you may eliminate this problem. Otherwise I'm afraid we're going to be stuck with bugs.
  • Speed (Score:5, Insightful)

    by holophrastic ( 221104 ) on Tuesday May 20, 2003 @09:01PM (#6003318)
    Why spend time testing and debugging and designing for situations which are too rare to be profitable?

    I program my applications as properly as I can. But when the client wants to save money by testing it themselves and ignoring non-fatal bugs, they save money and are happy doing it.

    So in other words, economy.
    • Re:Speed (Score:5, Insightful)

      by mooman ( 9434 ) on Tuesday May 20, 2003 @09:48PM (#6003715) Homepage
      I think this is one of the few responses to hit upon the crux of the issue.

      Most of the other comments revolve around code and computers, or perhaps even the human nature of the programmer itself, but I think they all neglect the "one ring that rules them all": Management.

      I'm not quite an old fogie yet in the software world, but can at least claim to have been around (professionally) for about a dozen years and worked for about half a dozen Fortune 500 companies (plus a few startups).

      In every single case I could go on and on about corners that were cut, testing schedules that were compressed, last minute features that were added without proper design, and an alarming number of times when programmers expressed concern about the quality of some module/functionality only to have it ignored by management.

      It's soured me so much that I'm actually considering ditching the IT world outright. In a land where deadlines control everything, you are going to have stability or quality issues - I guarantee it.

      Now, I've known a few apathetic programmers that I personally wouldn't trust with a pencil, much less the code that my company is selling, but by and large, I think programmers all have the potential to make almost entirely stable code, or at least as stable as the tools/libraries that they have to work with. But this relies on a rather more liberal amount of testing than most companies seem willing to invest in.

      Most companies seem to use a sort of 80/20 rule. If they get 80% of the bugs out, that's good enough. They can get their product to market and ship patches later. Having a more robust application just isn't as strong a selling point, and it's even harder to prove. How do you show, objectively, that your app is more stable than your competition's? Most companies would flinch from even opening this can of worms, because then you have to fess up to a certain non-zero percent failure rate on your own product.

      This whole issue touches on the "featuritis" problem as a whole. In order to maintain a revenue stream, software companies have to do one of two things:
      1) Keep making new versions of your app(s) *and* convince everyone they need the newest one [often when they really don't] -or-
      2) Try to make revenue through support contracts.
      The latter mostly goes away if your code is as good as you claim it is, making it a tough sell, or your customer base just doesn't have the budget for an ongoing contract where they really aren't getting anything new over time but still have to pay for it!

      Thus, the push for "features", and with the push for features you have deadlines, and with deadlines you have management doing whatever it takes to meet them, even if those choices are detrimental to the product.

      Ugh. This is why I hold on to the hope that open source apps, frequently hobbyist in nature, will continue to gain strength. In general, the contributors to those projects are chasing a vision and not market share. These guys and gals seem more willing to keep plugging away at something before they call it "done" and foist it on users. Of course, most of these projects are never "done" by their own admission, and will spend their lives in perpetual development. But at least I know that some of the effort is toward bug fixing and not just new features.

      But when Microsoft could have tried to make Word 95 better and more stable, they instead came out with 97.. and instead of making it more stable, they decided the market needed Word 2000, and then... Yawn. You get my drift. I would have been happy with a bulletproof Word 97 myself...

      [disclaimer: In the above I've made an outright embarrassing number of generalizations and I hope they are identified as such and that I don't get slammed with a litany of examples supposedly to the contrary. Thank you.]
      • Re:Speed (Score:4, Interesting)

        by mackstann ( 586043 ) on Tuesday May 20, 2003 @10:16PM (#6003921) Homepage

        Well said, I would have to agree with the majority of your post. The only thing I have an issue with is:

        I'm not quite an old fogie yet in the software world, but can at least claim to have been around (professionally) for about a dozen years and worked for about half a dozen Fortune 500 companies (plus a few startups).


        [..]

        It's soured me so much that I'm actually considering ditching the IT world outright.

        It took you over a decade? I've been working in "light" IT for about 4 months, and I've already come to this unfortunate conclusion. Writing commercial software just isn't fun: not only do you have to write software that you may not find all that interesting, but you are also denied the opportunity to use your skills to the fullest and create something that you are truly proud of. Corners are cut, and in the end, you realize that it's just a "product", or an in-house "app"; it only needs to work "good enough", never mind if the code needs cleaning up or whatever other issues there are (they don't (seem to) exist if you're not staring at the code!).

  • by ziggy_zero ( 462010 ) on Tuesday May 20, 2003 @09:01PM (#6003320)
    ...I remember my teacher saying "Computers do exactly what they're told, not necessarily what you want them to do."

    I think the root of the problem is time. Microsoft doesn't have the time to spend going through every possible software scenario and interaction, or every possible hardware configuration. If they did do that, it would probably take a decade to pump out an operating system, and by that time hardware's changed, and it's a neverending cycle.....

    We just have to accept the fact that the freedom of using the hardware components we want and the software we want, all made by different people, will result in unexpected errors. I, for one, have come to grips with it.

    • by nick_davison ( 217681 ) on Tuesday May 20, 2003 @09:43PM (#6003683)
      ...I remember my teacher saying "Computers do exactly what they're told, not necessarily what you want them to do."

      D&D summed it up for me, years ago, with the wish spell: At its purest, it's too powerful to give to players - they'll unbalance and destroy the game. However, it can be balanced by giving them exactly what they ask for.

      "A demon lord approaches you out of the shadows."
      "I cast 'wish' - I wish for a +100 sword of almighty vorpal type slayingness."
      "The sword appears in the demon's hand. He thanks you for it, then hits you."

      Writing good code is like making a good wish. All you can do is try to cover as many eventualities as possible. The problem is, code gets really slow to run and even slower to write when you have to add out of bounds checks on every argument, error handling and reporting, garbage collection and all the rest. Even then, there'll always be some twisted scenario that you didn't know could exist so didn't plan for. So most people just give up, wish for the damn sword and hope the PC/Dungeon Master doesn't have too evil an imagination this time.
    • Time is Money. (Score:5, Interesting)

      by Rimbo ( 139781 ) <rimbosity@sbcglo[ ].net ['bal' in gap]> on Tuesday May 20, 2003 @09:53PM (#6003744) Homepage Journal
      I think this is basically the right answer.

      A couple of months ago, the company I worked for spent a lot of time and effort developing a robust testing methodology. We had a software product that through blood sweat and tears would not crash unless you basically blasted the hardware in some way.

      But that led to two problems. First, we only had so many people working, and resources spent testing and bugfixing were not being used to add new features. Second, the time it took to get it that robust delayed the product's release beyond the point where we could recover the investment. [Time developing] * [Cost of operating] was greater than [expected number of units sold] * [price per unit].

      What ended up happening was that we lacked the features to justify the price and number of units we needed to sell to cover the cost of developing it. We had no bugs -- and we could be certain of it -- that would crash the machine.

      As of last month, the company could no longer afford to pay me. I'm not there any more.

      The moral of the story is that trying to make a bug-free product will bankrupt your company, especially a startup. Software tools have improved, but the benefit largely goes towards adding new whiz-bang features that sell the product for more money, not to being able to fix more bugs.

      What we should do as engineers and managers of software products is to not be afraid of getting the product out the door with a few bugs in it if we want our company to do well; this business reality is ultimately why bugs will be a big part of software for the foreseeable future.
  • by null-sRc ( 593143 ) on Tuesday May 20, 2003 @09:01PM (#6003324)
    *(int *)0 = 0;

    never follow the null pointer they said... what are they hiding there????
  • by woodhouse ( 625329 ) on Tuesday May 20, 2003 @09:02PM (#6003327) Homepage
    Because reliability is inversely proportional to complexity. Systems these days are generally a lot more complex than those of 10 years ago, and in complex systems, bugs are much harder to find. The fact that you say stability hasn't changed is in fact a pretty impressive achievement if you consider how much more complex hardware and software is nowadays.
  • by Jeremi ( 14640 ) on Tuesday May 20, 2003 @09:02PM (#6003331) Homepage
    It's the need for new features. Every feature that gets added to a piece of software is a chance for a bug to creep in.


    Worse, as the number of features (and hence the amount of code and number of possible execution paths) increases, the ability of the programmer(s) to completely understand how the code works decreases -- so the chance of bugs being introduced doesn't just rise with each feature, it accelerates.


    The moral is: You can have a powerful system, a bug-free system, or an on-time system -- pick any two (at best).

  • by MightyTribble ( 126109 ) on Tuesday May 20, 2003 @09:03PM (#6003335)
    Some crashes aren't the fault of the OS. Bad RAM, flaky disk controllers, CPUs with floating-point errors (Intel, I'm looking at *you*. Again. *cough* Itanium *cough*)... all can take down an OS despite flawless code.

    That said, some Enterprise-class *NIX systems (I'm specifically thinking of Solaris, but maybe AIX does this, too) can work around pretty much any hardware failure, given enough hardware to work with and attentive maintenance.
  • crashes? (Score:4, Interesting)

    by Moridineas ( 213502 ) on Tuesday May 20, 2003 @09:03PM (#6003338) Journal
    Well, among the computers that I manage, we've got an OpenBSD server that never crashes (uptime max is around 6 months--when a new release comes out) and a FreeBSD server that has never crashed--max uptime has been around 140-150 days, and that was for system upgrades/hardware additions.

    On the workstation side they are definitely not THAT stable, but since we've switched to XP/2K on the PC side, those PCs regularly get 60+ days of uptime. Just as a note--I had an XP computer the other day that would crash about two or three times a day. The guy that was using it kept yelling about Microsoft, etc etc etc. Turned out to be bad RAM. After switching in new RAM it's currently at 40 days uptime (not a single crash).

    For some reason the macs we have get turned off every night so their uptime isn't an issue, but from what I hear OSX is quite stable.
  • Touchy subject (Score:5, Interesting)

    by aarondyck ( 415387 ) <(aaron) (at) (ufie.org)> on Tuesday May 20, 2003 @09:04PM (#6003346) Homepage Journal
    I remember years ago having a conversation with an IT manager at IBM. We were talking about the inability of computer programmers to make their code foolproof. His point was that we don't see problems like this with proprietary hardware. When was the last time someone crashed their Super Nintendo? Of course, with a PC platform (or even Mac, or whatever else) there are problems of unreliability. His idea is that this is because of sloppy programming. The reason we were having this conversation is that I had a piece of software (brand new, I might add) that would not install on my computer. You would think that a reputable software company (and this [sierra.com] was a reputable company) would test their product on at least a few systems to make sure that it would at least install! The end result was that I ended up never playing the game (not even to this day), nor have I purchased another title from that company since that time. Perhaps that is the solution to the root problem?
  • by Hanji ( 626246 ) on Tuesday May 20, 2003 @09:04PM (#6003347)
    Scientific American actually had an article [sciam.com] on a similar topic. Basically, they seem to be accepting crashes as inevitable, and were focusing on systems to help computers recover from crashes faster and more reliably...

    They also propose that all computer systems should have an "undo" feature built in to allow harmful changes (either due to mistakes or malice) to be easily undone...
  • It's expected. (Score:4, Insightful)

    by echucker ( 570962 ) on Tuesday May 20, 2003 @09:05PM (#6003361) Homepage
    We've lived with bugs for so long, they're a fact of life. They're accepted as part of the daily dealings with computers.
  • by T5 ( 308759 ) on Tuesday May 20, 2003 @09:05PM (#6003364)
    It's all about the bits. There are just so many more of them now, and a great deal more pressure in the marketplace to bring ever newer software and hardware to market. Back in the day of the IBM 360 and the VAX, even though we were mesmerized by the capabilities of these machines, they were years and years in the making, debugged much more thoroughly than we can hope for today, and much, much simpler.

    And let's not forget that this was the exclusive realm of the highly trained engineer, not some wannabe type that pervades the current service market. These guys knew these machines inside and out.
    • by twitter ( 104583 ) on Tuesday May 20, 2003 @09:54PM (#6003755) Homepage Journal
      this was the exclusive realm of the highly trained engineer, not some wannabe type that pervades the current service market.

      Let's hear it for the "wannabes". I'm not a highly trained engineer by a long shot, but I've got computers that don't go down except for power outages. Then they come right back up. As ESR is so fond of pointing out, complexity kills traditional software. Closed source can't keep up.

      Free software has the answer. Debian [debian.org] has 8,710 packages available to do anything commercial software does, mostly better. Not just one or two pieces of it, every piece. My systems never crash under their stable release and I run all sorts of services. How is this? It's easy. Free code gets used, fixed, improved and reviewed all the time. The pace of improvement is astounding. I could go on and on about things free software does that common commercial code does not. Code that never sees the light of day is dead.

  • by Zach Garner ( 74342 ) on Tuesday May 20, 2003 @09:05PM (#6003366)
    Read "No Silver Bullet: Essence and Accidents of Software Engineering" by Brooks. A copy can be found here [berkeley.edu].

    Software is extremely complex. Developing it to handle all possible states is an enormous task. That, combined with market forces for commercial software and constraints on developer time and interest for free software, causes buggy, unreliable software.
  • Microsoft (Score:5, Insightful)

    by eht ( 8912 ) on Tuesday May 20, 2003 @09:08PM (#6003388)
    Microsoft has made an extremely stable OS: it's called Windows 2000. As long as you use MS-certified drivers the OS should never crash; individual programs may crash under Windows, but you can hardly blame Microsoft for that. I have had Windows machines with months of uptime and no problems. They went down 8 days ago due to a power failure too long for my UPS's to handle, which also took down my FreeBSD machines; uptime is matched for all of them, and will one day again be measured in months.

    Yes I should probably patch some of my Windows machines, but I have my network configured in such a way that for the most part I don't need to worry and you don't have to worry about my network spewing forth slammer or other nasty junk.
    • Re:Microsoft (Score:5, Interesting)

      by VTS ( 673706 ) on Tuesday May 20, 2003 @09:20PM (#6003520)
      Some time ago I would have agreed with you, but not anymore. If Media Player crashes playing some video, then the whole system becomes unstable, and then even doing something like sending a file to the recycle bin freezes the UI...
    • Re:Microsoft (Score:4, Interesting)

      by CognitivelyDistorted ( 669160 ) on Tuesday May 20, 2003 @10:47PM (#6004140)
      Yes, NT5+ is very stable. MS is working on the driver problem. SLAM [microsoft.com] is a tool for verifying drivers. Given a requirement, e.g., after acquiring a kernel lock the driver must release it exactly once on all control paths, and some driver source code, SLAM can find all the ways the driver can fail the requirement. They have specifications for various driver types and are using them to test some drivers. It's a research project by the Software Development Tools group in MSR, but they're working on getting it stable and powerful enough to verify more drivers. If they can get it to work well enough, they'll supply it to hardware vendors.
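
      The flavour of rule being checked can be sketched in a few lines of C (the lock and device functions below are invented stand-ins, not real DDK calls; the point is the control path, not the API):

          #include <stdio.h>
          #include <stdbool.h>

          static int  locks_held = 0;
          static void lock_acquire(void) { locks_held++; }
          static void lock_release(void) { locks_held--; }
          static bool device_ready(void) { return false; }   /* force the error path */
          static int  do_transfer(void)  { return 0; }

          /* Rule under verification: a lock acquired here must be released
             exactly once on every control path. */
          static int driver_ioctl(void)
          {
              lock_acquire();
              if (!device_ready())
                  return -1;            /* BUG: early return leaves the lock held */
              int rc = do_transfer();
              lock_release();
              return rc;
          }

          int main(void)
          {
              driver_ioctl();
              printf("locks still held after ioctl: %d\n", locks_held);   /* prints 1 */
              return 0;
          }

      A checker in the SLAM style explores every path through driver_ioctl() against the acquire/release rule and flags the early return without ever running the driver.
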
    • Nope! Case in point. (Score:4, Informative)

      by fireboy1919 ( 257783 ) <rustypNO@SPAMfreeshell.org> on Tuesday May 20, 2003 @11:35PM (#6004401) Homepage Journal
      I have a Microsoft reference driver for my soundcard (i.e. Microsoft made the driver and approved it themselves). I use it on my computer.

      Unfortunately, two things cause it to fail.
      1) It doesn't play nice with other drivers on the same IRQ.
      2) Microsoft's advanced power management driver assigns it to the same IRQ as my USB port and my network card, and that can't be changed without a reinstall of Windows.

      So basically, what happens is that the sound card will eventually crap out completely and never work again (until reboot) if it attempts to work at the same time either of the other two devices on that IRQ are working.

      Keep in mind:
      1) Microsoft knows about this bug
      2) It causes system instability for lots of drivers - even certified ones

      I should also mention that nowhere does the OS report this bug; I had to find it through trial, error, and lots of research. Win2K is not as stable as you think.
  • Economics? (Score:5, Insightful)

    by iso ( 87585 ) <.slash. .at. .warpzero.info.> on Tuesday May 20, 2003 @09:09PM (#6003395) Homepage
    While it's not the whole story, something definitely has to be said about the fact that while people are willing to pay for features, they're rarely willing to pay more for stability. Quite frankly there's little economic incentive to make software that doesn't crash.

    If your market will put up with the occasional crash, and never expects software to be bulletproof, why bother putting the effort into stability? Until people start putting their money into the more stable platforms, that's not going to change.
  • by dsanfte ( 443781 ) on Tuesday May 20, 2003 @09:09PM (#6003401) Journal
    The ultimate solution to the problem is to let computers write the software themselves. Give them a goal, set up evolutionary and genetic algorithms, and let them go at it on a supercomputer cluster for a few months.

    Of course, you'd need to make sure the algorithms that humans wrote aren't flawed themselves, but once you got that pinned down, you would be more or less home-free.

    Even if you didn't take this drastic a step, another solution would be computer-aided software burn-in. Let the computer test the software for bugs. A super-QA analysis, if you will. Log complete program traces for every trial run, and let the machine put the software through every input/output possibility.
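
    A toy version of that burn-in idea, for the sake of illustration (the clamp() function and its invariant are invented here, not taken from any real product):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* The "program under test": clamp a value into [lo, hi]. */
        static int clamp(int v, int lo, int hi)
        {
            if (v < lo) return lo;
            if (v > hi) return hi;
            return v;
        }

        int main(void)
        {
            srand((unsigned)time(NULL));
            for (long i = 0; i < 1000000; i++) {
                int lo = rand() % 200 - 100;
                int hi = lo + rand() % 100;
                int v  = rand() % 400 - 200;
                int r  = clamp(v, lo, hi);
                if (r < lo || r > hi) {        /* the invariant the machine checks */
                    fprintf(stderr, "FAIL: clamp(%d,%d,%d) = %d\n", v, lo, hi, r);
                    return 1;
                }
            }
            puts("1,000,000 random trials, invariant held");
            return 0;
        }

    Even a million random trials only sample a sliver of the input space, which is the objection raised in the reply below.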

    • by Jeremi ( 14640 ) on Tuesday May 20, 2003 @09:24PM (#6003545) Homepage
      The ultimate solution to the problem is to let computers write the software themselves. Give them a goal, set up evolutionary and genetic algorithms, and let them go at it on a supercomputer cluster for a few months.


      That only works if you can write a fitness algorithm that can tell whether the program did the correct thing or not -- otherwise, you have no way to decide what to "breed" and what to throw away. And for many types of program, that fitness algorithm would be more difficult to write than the program you are trying to auto-generate...


      Of course, you'd need to make sure the algorithms that humans wrote aren't flawed themselves, but once you got that pinned down, you would be more or less home-free.


      All you've done is replace a hard problem ("write a program that does X") with a harder problem ("write a program that teaches a computer to write a program that does X"). No dice.


      Even if you didn't take this drastic a step, another solution would be computer-aided software burn-in. Let the computer test the software for bugs. A super-QA analysis, if you will. Log complete program traces for every trial run, and let the machine put the software through every input/output possibility.


      For most modern programs, there isn't nearly enough time left before the heat-death of the universe to do this. Hell, for programs other than simple batch processors, the number of possible inputs and outputs is infinite (since the program can do an arbitrary number of actions before the user quits it).

  • by PseudononymousCoward ( 592417 ) on Tuesday May 20, 2003 @09:14PM (#6003445)
    The number of bugs is smaller. Think of the systems used by the telcos, or NASA. Are they perfect? No, but they are much, much more stable than Win32, or Mac, or Linux. The reason is simple: the owners demand that they be.

    There are costs associated with fixing bugs and reducing crashes. The more stable an operating system is to be, the more time and money that must be devoted to its design and implementation. PC users are not willing to pay this amount for stability, either in explicit cost, or in hardware restrictions or in trade-offs for other features.

    As Linux evolves over time, its stability will always improve, but it may still never reach the stability of, say, VMS. Why? Because even with the open source model of development, there are still tradeoffs to be made, tradeoffs between new features and stability, mostly. And successive bugs are harder and harder to fix, requiring greater and greater amounts of time. At some point, the community/individual decides that they would rather spend their time going after some lower-hanging fruit.

    Just my $0.02

    Actually, IAAE.
    • by dghcasp ( 459766 ) on Tuesday May 20, 2003 @10:22PM (#6003973)
      Think of the systems used by the telcos, or NASA. Are they perfect? No, but they are much, much more stable than Win32, or Mac, or Linux. The reason is simple: the owners demand that they be.

      This reminds me of a story I read in the internal magazine of a telecommunications equipment supplier that I used to work for. It was about an international toll switch somewhere in the U.K. that had been up for 17 years (or something extreme like that). Furthermore, this included having all of its hardware upgraded and replaced. Twice.

      Just stop and think about that for a while in PC terms... "I replaced my motherboard with the power on without rebooting my system, while it was serving 10,000 web pages a second."

      Granted, this is a higher level of hardware with full redundancy, but it still boggles my mind.

    • Actually (Score:4, Insightful)

      by Sycraft-fu ( 314770 ) on Tuesday May 20, 2003 @10:50PM (#6004164)
      One of the biggest barriers to stability for something like Linux (or Windows) is the fact that it must accommodate new software and hardware configurations all the time. If you take a Lucent 7R/E phone switch, it will run on one given piece of hardware (the 7R/E). It will run Lucent's OS, and it will do only what it was designed to do (switch phone circuits). There is no putting new hardware in it unless it be Lucent approved, there is no loading of new apps to make it do things unless it be Lucent approved, and so on.

      If you want an open OS that will run with hardware by whoever happens to want to make it and software by whoever happens to want to write it, you cannot have a verified design that is 100% reliable. Unforeseen interactions WILL happen, and crashes or other malfunctions will result.
  • by hawkstone ( 233083 ) on Tuesday May 20, 2003 @09:15PM (#6003454)
    I'm sure it's harder to accomplish this for kernel-level code (it's primarily OSes being pointed at right here), but you can think everything is working hunky-dory and not realize something is going wrong under the covers.

    Most errors of this kind can be found with testing under tools like valgrind [kde.org] or Rational's Purify [rational.com]. I'm sure there are others (I've heard of ParaSoft Insure++, ATOM Third Degree, CodeGuard, and ZeroFault), but the quality of these tools really matters.

    The issue is that tiny errors can cause crashes intermittently, and not immediately. For example:
    uninitialized memory reads -- usually not a problem, but if this value is ever actually used, it will be.
    array bounds reads -- never acceptable, but depending on the structure of memory, may not always cause an immediate crash.
    array bounds writes -- like ABRs, may not be immediately fatal, but these are going to crash your code sooner or later.

    Since they don't always cause an immediate crash, these errors are likely to creep into released code without use of one of these tools. And if you want to know why we shouldn't always run programs in an environment that checks these kinds of things, try it once; you'll notice a speed hit of usually an order of magnitude. C/C++ is a perfectly acceptable language -- not all debugging has to be done by the compiler/interpreter or only after you notice a problem.
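
    All three error classes fit in a dozen deliberately buggy lines of C; each one can appear to run fine for a long time, which is exactly what a Valgrind- or Purify-style tool is for (example code written for this comment, not taken from any real program):

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int a[4];                       /* never initialized                    */
            int *p = malloc(4 * sizeof *p);
            if (p == NULL)
                return 1;

            int x = a[0] + 1;               /* uninitialized memory read            */
            int y = p[4];                   /* array bounds read: one past the end  */
            p[4] = 42;                      /* array bounds write: may quietly      */
                                            /* corrupt the heap and crash much later */
            printf("%d %d\n", x, y);
            free(p);
            return 0;
        }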

    Anyway, hope that wasn't too pedantic....
  • Obligatory anti-MS (Score:5, Insightful)

    by cptgrudge ( 177113 ) on Tuesday May 20, 2003 @09:18PM (#6003500) Journal
    Of course, there's no need to mention Microsoft's inability to create a stable system.

    What exactly is the purpose behind this? Why was it put in here? People are going to need to grow up if people in "our" circle want to be taken seriously. I've used Windows 2000 and Windows XP both. They crash as much as my Red Hat and Debian boxes do. Never. They are all rock solid.

    I work for a public school system. We have a class at the High School that teaches and certifies for A+ (I know, I know). They have all sorts of problems getting stuff to work and to get a system stable. In Windows and Linux.

    It isn't because they are high schoolers.

    It isn't because they are "just learning".

    It's because they buy really shitty hardware. They look for the best cost, and they get their hardware from some loser manufacturer that has fucked up drivers and horrible quality control.

    Properly maintained boxes with quality hardware in them just don't crash anymore. Programs maybe, but not systems.

    Christ, people, this has been beaten to death! Microsoft has a great product for an OS now! Get back to making something better than them instead of trying to convince yourself that Microsoft is delusional.

    Mod me Flamebait, I don't care.

  • by Dr. Bent ( 533421 ) <ben&int,com> on Tuesday May 20, 2003 @09:26PM (#6003564) Homepage
    Back in the Middle Ages, when the Catholic Church wanted a Cathedral built, they would pay a bunch of Freemasons to do it. The Freemasons viewed themselves as creative artisans, and they closely guarded the secrets they used to construct these impressive houses of worship.

    The method they used, however, was less than impressive. Typically, they would start with a general design, and piece together stone and mortar until something collapsed, which happened quite [thinkquest.org] often [heritage.me.uk]. Then they would patch the section that collapsed and keep on going until something else fell down, or they finished. Given the level of understanding with regard to physics and material science, those Freemasons had no other choice than to build them this way.

    Now fast forward to the 21st century. The engineering disasters on par with those medieval collapses can be counted on one hand (Tacoma Narrows Bridge and the Hyatt Regency walkway collapse are the only two I can think of). This is directly due to the fact that a civil engineer can determine if a design is structurally sound before they build it.

    Contrast this with modern day software development. We can't even tell if a system is flawed after we build it, let alone before. So software gets written, deployed, and put into the marketplace that has no assurances whatsoever of actually doing what it's supposed to do (hence the 10,000 page EULA).

    You can't have Civil Engineers until you have Physics. And you won't have 100% bulletproof software until you have Software Engineering. And you won't have that until someone can figure out a way to prove that a given piece of software will perform as it's supposed to. JUnit [junit.org] is a step in the right direction, but there's still a long way to go. It's going to take a breakthrough on the order of Newton to make Software Engineering as reliable a discipline as Civil Engineering.
    • by Minna Kirai ( 624281 ) on Tuesday May 20, 2003 @11:23PM (#6004337)
      It's going to take a breakthrough on the order of Newton to make Software Engineering as reliable a discipline as Civil Engineering.

      The reliability of today's Civil Engineering comes not from deep theoretical understanding a la Newton - it's really just the same "build, crash, repeat" method those Freemasons have been using for 1000 years.

      Now that we've had centuries of experience at building similar kinds of structures, most of the kinks have been worked out. Those rare CivEng projects that break new ground still have a high risk of unexpected failures. (A 4000% cost overrun is a failure [boston.com])

      Civil Engineering still uses empirical testing to decide if a new technique is reliable, as does "Software Engineering". You just notice it more in SE because that field has more opportunities for innovation and much, much fewer penalties when an experiment fails.

      JUnit is a step in the right direction, but there's still a long way to go.

      JUnit is a step down a curving road to a dead end. It won't take us to an ultimate solution (but it will provide benefit in the near-term future). That's because it's not a system to help formally prove code is correct (which some unpopular languages support to small degrees) - instead, unit testing is just a way to automate "build, crash, repeat" empirical testing.
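
      For readers who haven't seen it, the JUnit idea in miniature looks like this; JUnit itself is Java, so this is the same pattern transplanted to C with an invented function under test:

          #include <assert.h>
          #include <stdio.h>

          /* Hypothetical function under test: addition clamped at 100. */
          static int add_sat(int a, int b)
          {
              int s = a + b;
              return s > 100 ? 100 : s;
          }

          static void test_add_sat(void)
          {
              assert(add_sat(2, 2) == 4);
              assert(add_sat(90, 20) == 100);   /* clamps at the ceiling */
              assert(add_sat(-5, 3) == -2);
          }

          int main(void)
          {
              test_add_sat();
              puts("all tests passed");
              return 0;
          }

      Re-running this after every change automates the "build, crash, repeat" loop; it still only ever exercises the cases someone thought to write down, which is the point above.
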
    • Re:Try the UML (Score:4, Interesting)

      by Billly Gates ( 198444 ) on Wednesday May 21, 2003 @12:18AM (#6004645) Journal
      Architects and engineers use extremely detailed drawings. Have you ever taken any drafting courses in high school or college? Every piece, and even the size of every screw, is detailed as accurately as possible. It takes forever to get anything done because the precision is more important. It drives some people like myself crazy.

      The blueprint is the actual prototype of the product being designed.

      The problem is that if you document every step and algorithm in exact detail, you will spend weeks, months, and yes, years without a single line of code!

      This is unacceptable in today's business world, where all the projects are due yesterday and your bosses demand to know, percentage-wise, how much of the code has been developed. If you spend a month planning and not a single line of code is developed, you're canned.

      My father took over a project that a clueless IT manager got because she slept with the CIO. Anyway, she went to a seminar which talked about how flowcharting everything would be the wave of the future. She then had all the programmers draft every single algorithm, down to the very if statements themselves, on paper. After 4 months and not a single line of code, my old man took over. From there he finished the project within 3 weeks!

      My point is that drafting programs in that kind of detail is too time consuming. In a way your drawing is the program, and changes can be made as you go. It's essential to have good flowcharts and notes, but they need to be generalized. If there is an error in the code you can delete the line and fix it; in engineering you would have to disassemble the actual product and redesign it, which costs time and money nobody will accept. In software that limitation doesn't exist, or at least isn't as severe.

      UML tries to be the blueprint of all software programs, but in practice it is only used to explain certain subsystems and algorithms. Mostly flowcharts are used so all the developers have a sense of how the program will work and how to invoke different pieces of it.

      I do not think this is going to change unless there is a quick and easy way to debug UML charts. Logic errors are killers, and if the chart were perfect I suppose you could compile the UML directly into the language of choice.

      Hmmm, in fact this might be the way to do it in the future.

  • by Christopher Thomas ( 11717 ) on Tuesday May 20, 2003 @09:31PM (#6003611)
    There are several reasons why software keeps crashing, and they aren't going away any time soon. These reasons are:

    • You can't prove that most software works.

      Except for a restricted set of cases, you can't prove that a given piece of code works or doesn't work. A truly exhaustive set of tests would be impractical to perform, and formal proofs of correctness place strong limits on the type of code you can write and the environment in which you can write it.

      The result is that code is assumed correct when no bugs are found. This only means that there probably aren't _many_ bugs left. Thus, it may still crash (or have a security hole, or what-have-you). A back-of-the-envelope sketch of just how hopeless exhaustive testing is appears after this comment.

    • Software is very complex.

      Software has been complex for a long time. It just tends to be bigger now. A larger system has more opportunities for unexpected high-level interactions between components, but even a smaller system will have enough twists and turns that formulating a really good test suite, or checking the code by inspection, is very difficult. Bugs will be missed. As was discussed above, many of these missed bugs will slip through testing and reach the world.

    • Nobody wants to pay for perfect software.

      As more effort is applied, you can get asymptotically closer to a bug-free system. However, this is far past the point of diminishing returns on the cost/benefit curve. For sufficiently constrained systems, you can even try proving it correct, but this tends to lead to cutting out a lot of functionality, speed, or both.

      In situations where reliability must be had at any cost - aerospace control systems, vehicle control systems, medical equipment - the money will exist to produce near-perfect code, but even then there are bugs that occasionally bite. With commercial software, the buyer would rather have an application that crashes now and then than an application that costs ten times as much and comes out several years later.


      Free and/or open software avoids some of this by staying in development longer, which allows more of the bugs to be caught, but even free and/or open software evolves. Every change brings new bugs to be squashed. As long as there are new types of software that we want, it isn't going to end.
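
      As a rough, purely illustrative sketch of the "truly exhaustive testing is impractical" point above: even a function that takes nothing more than two 32-bit integers has 2^64 possible input pairs. The throughput figure below (a generous one billion tests per second) is made up just to make the arithmetic concrete:

      #include <cstdio>

      int main() {
          // Two 32-bit arguments give 2^32 * 2^32 = 2^64 possible input pairs.
          const double inputPairs = 4294967296.0 * 4294967296.0;
          const double testsPerSecond = 1.0e9;   // assumed: one billion tests/sec
          const double seconds = inputPairs / testsPerSecond;
          const double years = seconds / (365.25 * 24.0 * 3600.0);

          // Prints roughly 584.5 years -- for one tiny two-argument function,
          // ignoring state, threads, timing, hardware, and the rest of the system.
          std::printf("Exhaustive testing: about %.1f years\n", years);
          return 0;
      }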
  • STFP (Score:5, Insightful)

    by rice_burners_suck ( 243660 ) on Tuesday May 20, 2003 @09:36PM (#6003651)

    Software crashes because: Software is an immature field. Good software takes time. Software is unobvious to business managers who want the job done yesterday.

    Businessmen generally do not understand the internal workings of software. They are in a "big-picture" sort of world where software is but one pesky detail that will be taken care of. A computer crash that causes so many thousands of dollars in damages is no different than a truck crash. There is simply a risk to every element of business. If the risk is relatively low, the big shots don't care about it. Grocery stores in earthquake prone areas continue to place glass jars on the edges of shelves. Sure, there will be an earthquake one day, but it's a calculated part of business risk, and the risk is relatively low (the Earth doesn't shake every five minutes).

    Software bugs are a similar risk. The software needs to look like it works. It needs to crash (and lose data) infrequently enough that it will still sell. The business is not concerned with stamping out software bugs; it is concerned with releasing the software and making money. If the need arises, the business will improve the software and make more money. More often than not, this means adding features and shiny graphics. Fixing bugs is not very important to companies because customers do not pay for bug fixes. To the consumer, bugs are defects whose fixes should be free; to the company, bugs are a minor risk that would cost too much to fix. So you'll reboot once in a while or lose an hour's work once in a while. If it fries your hard disk, well, you should have backed up your data.

    Software is also one of the newest fields of human endeavor. Buildings have been built, ships have sailed and farms were farmed, all for thousands of years. No matter how much progress happens in these fields now, they have come so close to "perfection" that continued improvement serves to lower cost, improve safety and increase convenience. It's not a matter of, "Gee, how can we make buildings that actually stand without falling down three times a week?" It's just a matter of, "How wide, how deep, how tall and what color glass do you want on the outside?" You pay X dollars, wait Y months and voila, there is a building. But programming has been around for how long, 50 years? It's an increasingly important but very immature field.

    Buildings, bridges, ships... they're obvious. Everyone knows that if enough lifeboats aren't put on an unsinkable ship, it'll sink on purpose, just to piss you off. Everyone knows that if a 100 story building is going to stand, it has to take 10 years to build it. Everyone knows that a dam has to be pretty damn strong or it'll break and flood half the countryside. The building, shipyard and dam businesses aren't progressing at light speed. It is easy to justify 10 years for an outrageous building design because people KNOW what is involved. But software... Now that's totally unobvious. Software is an idea. It's abstract. It's a bunch of curse words that look like gobbledygook to the uninitiated. A bunch of "noise" characters on a broken terminal. Something done by a bunch of skinny, pimply faced geeks who got beat up in high school, took the ugly girl to prom and didn't have any friends. Why should a manager bother to care that fst_jejcl_reduce() causes a possible NULL pointer in the outer loop if case 32 is activated, which happens if the previous re-sort encountered two items with similar Amount fields, all of which will take a whole day to find and fix and will only happen, say, 2% of the times this particular feature is invoked by the user, which isn't that often? Why should anybody justify spending 2 years to develop some bulletproof program that can be banged out in 3 months, with bugs? What's the problem? Construction workers are risking their lives, moving heavy things, sweating all day in the hot sun... while geeks are sitting in offices just punching crap on a keyboard. How difficult could it possibly be? To

  • Turing showed this (Score:4, Interesting)

    by martin-boundary ( 547041 ) on Tuesday May 20, 2003 @09:41PM (#6003671)
    A crashed computer is a computer that's stopped. Alan Turing proved in 1936 that the halting problem is undecidable: no general procedure can tell you, for every program and every input, whether it will eventually stop. So it's impossible to know, under all possible circumstances (inputs), when and how a computer is going to crash.

    Accept it. It's a fact of nature.
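
    For the curious, the flavor of Turing's argument can be sketched in code. Suppose someone handed us a magic halts() routine that could decide, for any program source and input, whether the run eventually stops; the self-referential program below (everything in it is hypothetical, stubbed out only so the sketch compiles) would then contradict it:

    #include <string>

    // Hypothetical decider: true if running program `p` on input `i` would
    // eventually halt. Turing's theorem says no such function can be correct
    // for all programs; this stub exists only so the sketch compiles.
    bool halts(const std::string& p, const std::string& i) {
        (void)p; (void)i;
        return true;   // placeholder -- no correct implementation is possible
    }

    // The troublemaker: given a program's source, do the opposite of whatever
    // halts() predicts that program does when fed its own source.
    void contrary(const std::string& programSource) {
        if (halts(programSource, programSource)) {
            for (;;) {}   // predicted to halt? then loop forever
        }
        // predicted to loop forever? then return (halt) immediately
    }

    int main() {
        // Feed contrary() its own source and you get the contradiction:
        // if halts() says it halts, it loops; if halts() says it loops, it
        // halts. Hence no halts() can be right about every program -- and so
        // no tool can predict every possible hang or crash in advance.
        return 0;
    }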

  • by dirk ( 87083 ) <dirk@one.net> on Tuesday May 20, 2003 @09:44PM (#6003696) Homepage
    When can we finally give up the FUD of "MS crashes all the time"? Anyone who has used a later MS OS (Win2k or XP) can easily see that they crash very rarely. I have had more problems with my Red Hat install than with my Windows install in the past 6 months, and on the MS system most of the problems have been third-party software, while on the Linux system most of the problems have been the OS itself. The reason systems crash is that there are many pieces, written by many different people, interacting with each other. This is the same whether the OS is Linux or Windows. The harping on the instability of Windows does nothing but hurt the Linux cause, since anyone who actually uses a newer version of Windows knows such claims have no basis in reality.
  • by dschuetz ( 10924 ) * <.gro.tensad. .ta. .divad.> on Tuesday May 20, 2003 @09:51PM (#6003735)
    Face it -- if our cars broke down as frequently as Windows (or Linux or whatever), we'd be suing the auto industry out of business.

    If our VCRs ate every tenth tape and only played tapes from the same manufacturer as the VCR with any quality, they'd all be returned to Circuit City.

    But for software, we grit our teeth and say, well, I just don't understand computers, and reach for the power switch.

    Until we, as consumers, start fighting for software that works without crashing, we'll continue to get the lowest possible quality -- just as we have for years. Once the customer starts demanding a quality product, the quality (and whatever software development practices, languages, testing procedures, etc., are needed) will follow.

    Bottom line -- there's no real incentive. Microsoft makes billions with buggy software, the increase in profit for selling non-buggy software is pretty small.

  • by Frobnicator ( 565869 ) on Tuesday May 20, 2003 @09:56PM (#6003769) Journal
    Beyond all the hyperbole and the other reasons given here, there is something that could be done but usually isn't.

    In C++, which a great deal of software is written in, an exception block [or the language or system equivalent] placed around the entire application will catch just about any recoverable error. This is the mechanism behind Windows' "your application has performed an illegal operation and will be terminated" message (blue screens are the kernel-level cousin), and it is how Linux and other Unixes end up generating a core dump.

    The actual handling may be in a signal handler, a try/catch block, or an abend routine, but the functionality is present in every actively developed language I have ever worked with, from COBOL and Fortran to C, C++, Java, and Object Pascal.
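
    A minimal sketch of that idea, with a made-up doRealWork() standing in for the application proper: wrap the whole run in a last-chance try/catch and install a signal handler, so a failure produces a message (or a save-and-exit) instead of a raw crash:

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>
    #include <exception>
    #include <stdexcept>

    // Stand-in for the real application; it fails on purpose for the demo.
    void doRealWork() {
        throw std::runtime_error("something went wrong deep in the code");
    }

    // Handler for faults that C++ exceptions don't cover (e.g. SIGSEGV).
    // Strictly speaking only async-signal-safe calls belong here; this
    // sketch just reports and exits.
    extern "C" void onFatalSignal(int sig) {
        std::fprintf(stderr, "caught fatal signal %d, shutting down\n", sig);
        std::_Exit(EXIT_FAILURE);
    }

    int main() {
        std::signal(SIGSEGV, onFatalSignal);
        std::signal(SIGFPE, onFatalSignal);

        try {
            doRealWork();
        } catch (const std::exception& e) {
            // Last-chance handler: log, save state, tell the user -- anything
            // is friendlier than vanishing without a word.
            std::fprintf(stderr, "unhandled exception: %s\n", e.what());
            return EXIT_FAILURE;
        } catch (...) {
            std::fprintf(stderr, "unhandled non-standard exception\n");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }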

    The main reason for applications actually crashing is programmer laziness.

    The main reason for applications getting into a state that they can crash is improper complexity management.

    When it comes to drivers, I'm much more forgiving, since it is quite difficult to manage both the hardware and software, and the communication between different programs.

    Finally, as for the operating system itself -- the layer between the drivers and the applications -- I haven't seen one in the last 5 years that was unstable. Even Windows ME, for all its faults, was very stable in the actual 'operating system'.

    But that's just my 2 pesos.

    frob

  • by Salamander ( 33735 ) <jeff AT pl DOT atyp DOT us> on Tuesday May 20, 2003 @11:24PM (#6004342) Homepage Journal

    A lot of people are answering the question of why there are bugs at all, and it's an important question, but I'd like to take a different angle and consider why there are so many visible bugs. Why does a bug in a driver, or even an application, bring down a whole system? In addition to reducing the incidence of actual bugs, IMO, we should also do a better job of containing the bugs that will inevitably exist even if we all use the latest whiz-bang code analysis tools (which rarely work for kernel code anyway). Some of the semi-informed members of the audience are probably thinking that's the job of the operating system; I'd argue that our entire current notion of operating systems is flawed. There are way too many components in a typical computer system that "trust each other with their lives" in the sense that if one dies all die. Memory protection between user processes is great, but there should be memory protection between kernel entities, and other kinds of protection, as well. One of the basic services that operating systems need to provide going forward is greater fault isolation and graceful instead of catastrophic degradation.

    The Recovery Oriented Computing [berkeley.edu] project at Berkeley has gotten some press recently for trying to address this issue. Many here on Slashdot don't seem to "get it" because they've never worked on systems in which a component failure was survivable; they don't realize that rebooting a single component - perhaps even preemptively - is better than having the whole system crash. "Software rot" is a real problem, no matter how hard we try to wish it away. ROC isn't about saying bugs are OK; it's about saying that bugs happen even though they're not OK, and let's do the best we can about that. Another project in the same space, with more of a hardware/security orientation, is Self Securing Devices [cmu.edu] at CMU. There, the idea is to find ways that parts of a system can work together without having to share each others' fate. While the focus of the work is on security, it shouldn't be hard to see how much of the same technology could be applied to protect a system from outright failure as well as compromise. There are plenty of other projects out there trying to address this problem, but those are two with which I happen to have personal experience.

    The key idea in all cases is that current OS design forces us to put all of our eggs in one basket, and that's really not necessary. Designing fault-resilient systems is tough -- few know that better than I do -- but that's only a reason to do it once, in the OS, instead of devising an ad-hoc clustering solution for each specific application. Lots of people use various forms of clustering to achieve fault containment and survive failures, but those solutions tend to be very application-specific. Do you think Google's solution works for anything but Google, or that a database transaction monitor is useful for anything that's not a database? Fault containment needs to be a fundamental part of the OS, not something we layer on top of it.
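
    To make "rebooting a single component instead of the whole system" concrete, here is a rough supervisor sketch -- POSIX-only, with a made-up worker, and not taken from the Berkeley or CMU projects: the risky component runs in its own process, and when it dies the supervisor restarts it rather than letting the failure spread:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    // Stand-in for a buggy component; in a real system this might be a
    // driver, a daemon, or any other isolatable piece of the environment.
    void runWorker() {
        std::printf("worker %d: doing work\n", static_cast<int>(getpid()));
        sleep(1);
        std::abort();   // simulate the component crashing
    }

    int main() {
        for (int restarts = 0; restarts < 5; ++restarts) {
            pid_t pid = fork();
            if (pid == 0) {              // child: the isolated component
                runWorker();
                _exit(EXIT_SUCCESS);
            }
            int status = 0;
            waitpid(pid, &status, 0);    // parent: supervise the component
            if (WIFEXITED(status) && WEXITSTATUS(status) == EXIT_SUCCESS) {
                break;                   // clean exit, nothing more to do
            }
            // Only the component died; the rest of the "system" (this
            // process) is still up, so restart the component and carry on.
            std::fprintf(stderr, "supervisor: worker died, restarting (%d)\n",
                         restarts + 1);
        }
        return 0;
    }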
