
Great Computer Science Papers? (410 comments)

slevin writes "Recently I listened to a talk by Alan Kay who mentioned that many 'new' software ideas had already been discovered decades earlier by computer scientists - but 'nobody reads these great papers anymore.' Over the years I have had the opportunity to read some really great and thought-provoking academic papers in Computer Science and would like to read more, but there are just too many to sort through. I'm wondering what great or seminal papers others have encountered. Since Google has no answers, perhaps we can come up with a list for the rest of the world?"
  • Nay, archetypal... (Score:5, Interesting)

    by Empiric ( 675968 ) * on Sunday November 16, 2003 @08:41AM (#7486599)
    For "great and seminal" it's hard to beat Alan Turing's 1950 (!) paper on AI [loebner.net].
    • by Space cowboy ( 13680 ) on Sunday November 16, 2003 @09:02AM (#7486651) Journal
      Alan Turing was a genius, pure and simple.

      His crypto work during the war was massively significant in winning the Battle of the Atlantic, and his ideas on programming, AI, neural networks, and the more public "Turing test" were breathtaking and groundbreaking. Less well known are his theory of non-linear biology and some exceptional papers in physics. A modern version of the Renaissance scientist, the Michelangelo of his day.

      The hounding of him (because he was gay), his arrest, loss of clearance, and subsequent suicide by cyanide in '54 amounted to shameful treatment of one of the most brilliant men in science this century.

      Simon.
      • by Anonymous Coward
        > one of the most brilliant men in science this century.

        Much as I hate to nitpick... :p
      • by crawling_chaos ( 23007 ) on Sunday November 16, 2003 @10:19AM (#7486882) Homepage
        While Turing's contributions to breaking Enigma were valuable, as the years slide on we find that his contributions may have been overstated to cover up other covert operations. Try reading Seizing the Enigma by David Kahn (of Codebreakers fame). It appears that the Enigma was also solved by some covert ops that seized monthly settings documents from Nazi weather ships and surrendered U-boats. For most of the war, hints like these were needed to get anything resembling real time Ultra, even with the bombes cranking away at full speed.

        Interestingly enough, the Luftwaffe was very careful with its settings documents and its discipline for changing rotors. Bletchley Park never solved the Luftwaffe version of Enigma.

        None of this should detract from Turing's greatness as a mathematician, but it appears that the British used his reputation to hide a few other facts. No need to alert your enemies to all of your methods, after all.

        • No need to alert your enemies to all of your methods, after all

          After Cryptonomicon, everybody and their uncle knows how it's done ;o)
        • by CodeBuster ( 516420 ) on Monday November 17, 2003 @12:47AM (#7491256)
          Interestingly enough, the Luftwaffe was very careful with its settings documents and its discipline for changing rotors. Bletchley Park never solved the Luftwaffe version of Enigma.

          What a bunch of bollocks! The following are excerpted from "The Ultra Secret", which was written by F. W. Winterbotham, who worked closely with Alan Turing and the rest of his team at Bletchley Park throughout the war.

          "Although the well-guarded Kriegsmarine messages could not be deciphered, BP was regularly eavesdropping on the Luftwaffe. The Luftwaffe was particularly negligent in applying appropriate safeguards to their Enigma-coded messages, perhaps due to a measure of arrogance evident in World War II "fly-boys." Through this source the British were able to piece together Hitler's plans for the cross-channel invasion, dubbed Seelowe (Sealion). Before it could be accomplished, the RAF would have to be neutralized. Warned beforehand of Luftwaffe bombing raids on airfields, designed to eliminate not only the fields themselves but also destroy RAF fighters on the ground, British planes were able to avoid being caught as sitting ducks. Although Ultra intelligence forewarned of impending attacks, coastal radar (underestimated by the Germans) was able to pinpoint flights of incoming enemy planes."

          "The British were regularly reading Luftwaffe messages, Of particular interest were messages from the Fliegerverbindungoffiziere, or "Flivos", liaison officers responsible for coordinating air and ground operations The all important Kriegsmarine signals ("Dolphin") were still a mystery. U-33, on a mission to sow mines in the Firth of Clyde, was depth charged and forced to the surface on Feb 12, 1940 by minesweeper HMS Gleaner."

          "One of the first relied on German operators using some easily remembered sequence of letters as rotor starting positions. There were identified as "Cillies", after one operator who frequently used "Cilly", his girlfriend's name."

          Obviously you were misinformed about your chosen subject. The Kriegsmarine messages were the really tough ones to crack because their operators were disciplined about transmission lengths, randomized rotor selections for each message, and the distribution of the code books containing the key sequences to be used in a particular month. By comparison, the Luftwaffe operators used their girlfriends' initials as rotor settings and changed keys only infrequently.

    • by dido ( 9125 ) <dido&imperium,ph> on Sunday November 16, 2003 @09:57AM (#7486808)

      "On computable numbers, with an application to the Entscheidungsproblem" [abelard.org]" is unarguably the paper that began the field of computer science as we understand it today. Here we have the first descriptions of universal computing devices, Turing machines, which eventually led to the idea of universal stored-program digital computers. The paper even seems to describe, in what is unarguably the first ever conceptual programming language, a form of continuation passing style [ic.ac.uk] in the form of the "skeleton tables" Turing used to abbreviate his Turing machine designs. It's also relatively easy reading compared to many other scientific papers I've seen.

      Along with this we might also include Alonzo Church's 1941 monograph "The Calculi of Lambda-Conversion" (which sadly does not appear to be anywhere online), where the lambda calculus, the basis for all functional programming languages, is first described.
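
      Neither of those works contains anything we would call code, of course, but the machine Turing describes is easy to make concrete. Below is a minimal single-tape Turing machine stepper in Python; the transition table (a toy unary increment machine) is my own example for illustration, not anything taken from the 1936 paper.

      ```python
      # Minimal Turing machine simulator: the tape is a sparse dict, states are
      # strings, and the transition table maps (state, symbol) to
      # (symbol to write, head move, next state).
      def run(table, tape, state="start", pos=0, halt="halt", max_steps=10_000):
          cells = dict(enumerate(tape))              # blank cells read as "_"
          for _ in range(max_steps):
              if state == halt:
                  break
              symbol = cells.get(pos, "_")
              write, move, state = table[(state, symbol)]
              cells[pos] = write
              pos += 1 if move == "R" else -1
          return "".join(cells[i] for i in sorted(cells))

      # Toy machine: append a "1" to a unary number, i.e. add one.
      increment = {
          ("start", "1"): ("1", "R", "start"),       # scan over the existing 1s
          ("start", "_"): ("1", "R", "halt"),        # write a 1 at the first blank
      }
      print(run(increment, "111"))                   # prints "1111"
      ```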

  • Papers or books ? (Score:4, Informative)

    by Anonymous Coward on Sunday November 16, 2003 @08:46AM (#7486610)
    Often, when you're new to a given domain, there exists a book (on CiteSeer too...) that covers the domain and expresses, often better than the original authors, the main ideas.
    Then you can use CiteSeer to see what's new and what's fashionable in the domain.
    Anyway, one of the best (and oldest) papers I have read gave birth to a whole community:
    http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html
  • by G4from128k ( 686170 ) on Sunday November 16, 2003 @08:50AM (#7486617)
    I wonder how many IT gurus are members of ACM [acm.org] or IEEE Computer Society [computer.org]? The % of /. members who are in ACM must be very small because ACM only has 75,000 members in total.
    • by mscheid ( 318333 ) on Sunday November 16, 2003 @09:05AM (#7486655)
      ACM [acm.org] and IEEE [ieee.org] are just the places I would look for such papers. The proceedings of ACM SIGCOMM [acm.org] for example are a very good "filter" for the flood of papers on networking.
    • I'll bet that the total number of /.'ers who have access to ACM / IEEE through their company or educational institution is quite high. So perhaps not many are personal members, but many still have some kind of association with ACM and IEEE!
    • Costs too much (Score:5, Interesting)

      by Tangurena ( 576827 ) on Sunday November 16, 2003 @02:11PM (#7488098)
      I used to be a member of both societies. Annual student dues (I spent several years working on my masters) to obtain the magazines and journals from each one that interested me came to around $200 per year for each society, and a lot more for non-student dues. No company I have worked for in the last 10 years has been willing to underwrite professional society memberships, even though their written policies claim that they will.

      A recent short job assignment at HP let me run amok through the online libraries of both IEEE and ACM. It was interesting to see published articles from 5-10 years ago that directly covered topics that were the hot issues in the office today. Looking at the issues that were hot topics in the last few companies over the past 2 years, I saw the same pattern of scholarly articles being about 5-10 years ahead of the industry.

      While working in medium to large companies, I found the number of people who did not understand even simple concepts of computer science frightening.

      I am curious as to how much effort is wasted reinventing the wheel. I know it is a lot, because as a programmer on death-march projects I rarely have the hours to devote to finding out how other people solved the same problem 5-30 years ago. That pointy-haired boss breathing down my back thinks that any time not spent slaving over a hot keyboard is a waste of time. As the old saying goes: it is hard to remember the job is to drain the swamp when you are up to your armpits wrestling with gators. No amount of showing that spending a few hours sharpening the saw each week could save far more time than what appeared to be wasted ever made a difference. One past job allowed some time to be billed to research each week, until some PHB wandered by to bitch about it. It was the appearance of goofing off reading that made the boss look worse than the schedule slipping did. And appearances appear to be more important in today's economy than actual results.

  • Classic papers (Score:5, Interesting)

    by thvv ( 92519 ) on Sunday November 16, 2003 @08:55AM (#7486628) Homepage
    "The UNIX Time-Sharing System," by Dennis Ritchie & Ken Thompson, is one of the best-written papers ever. The elegance of thought and economy of description set a standard we should all aspire to.
    http://cm.bell-labs.com/cm/cs/who/dmr/cacm.html

    I list several more classics on my "Software Engineering Reading List" page at
    http://www.multicians.org/thvv/swe-readings.html
    • Re:Classic papers (Score:3, Informative)

      by F2F ( 11474 )
      Later surpassed by "Plan 9 from Bell-Labs", which distills the ideas from UNIX and improves on many of the areas where it was lacking:

      Plan 9 from Bell-Labs [bell-labs.com]

      Somebody else mentioned Rob Pike already, pity you can't find any of his older (pre-Plan 9) papers online anymore: "The Hideous Name" and "Cat -v Considered Harmful":

      R. Pike, P. Weinberger, "The Hideous Name" USENIX Summer 1985, pp 563-568.

      and an abstract of the other: http://gaul.org/files/cat_-v_considered_harmful.html

      As for history repeating itself, let me
  • by marsbarboy ( 625406 ) <georgebulmerNO@SPAMhotmail.com> on Sunday November 16, 2003 @08:56AM (#7486631)
    What about the work of Edsger Dijkstra? His seminal "Go To Statement Considered Harmful", his shortest-path algorithm, and the dining philosophers problem all come to mind.
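
    For anyone who has only ever heard the name, the shortest-path algorithm is compact enough to sketch from memory. Here is a minimal Python version using a binary heap; the priority queue is a modern convenience rather than part of Dijkstra's original 1959 presentation, and the example graph is made up.

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest distances from source over a graph given as
        {node: [(neighbor, weight), ...]} with non-negative weights."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                           # stale queue entry, skip it
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))                        # {'a': 0, 'b': 1, 'c': 3}
    ```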
  • On the Synthesis OS. Beautiful work.

    There are so many good papers though. You just have to read them all :)

  • by cperciva ( 102828 ) on Sunday November 16, 2003 @08:58AM (#7486637) Homepage
    If nobody reads those "great old papers" any more, there's probably a reason. Sometimes the ideas have been superseded; sometimes they weren't any good to begin with; often the papers are simply really hard to understand. The fact that people seriously suggest reading "great papers" reflects the immaturity of the field; in a field like mathematics, hardly anyone ever reads the original papers (even for work done in the 20th century), instead opting to read someone else's simplification/clarification of the ideas.

    We speak of the TAoCP as "the bible", but I'm not sure if there are any "new" ideas there; rather, the value of TAoCP is as a compilation and exposition of all the best ideas other people have produced.

    Learn about great algorithms; don't worry about reading great papers.
    • Surveys (Score:2, Insightful)

      Some of the most interesting papers are actually surveys. From them you will get the overview, often in an easy-to-read text, plus pointers to the seminal papers. You will also learn which publications are relevant.
      Try browsing ACM Computing Surveys. I recently read "A guided tour to approximate string matching". Quite good, and starting from there I could get good insight into the field.
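
      That survey is organized around the edit-distance problem, and the dynamic-programming recurrence at the heart of it is small enough to show here. This is the textbook Levenshtein computation, not anything specific to the survey itself:

      ```python
      def edit_distance(a, b):
          """Minimum number of insertions, deletions and substitutions
          needed to turn string a into string b (classic DP, row by row)."""
          prev = list(range(len(b) + 1))         # costs against the empty prefix of a
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,                    # delete ca
                                 cur[j - 1] + 1,                 # insert cb
                                 prev[j - 1] + (ca != cb)))      # substitute
              prev = cur
          return prev[-1]

      print(edit_distance("survey", "surgery"))  # 2
      ```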
    • by kfg ( 145172 ) on Sunday November 16, 2003 @09:52AM (#7486788)
      People don't read those great old papers anymore in the same way they don't read Euripides or Shakespeare anymore. They're difficult.

      Harlequin romance novels express the same ideas in much easier to read language.

      I didn't first learn my Special Relativity from Einstein's original paper. I learned it from Bertrand Russell's The ABCs of Relativity, but you can be sure that I later went back and read a translation of the original paper as well (and even poked at the original a bit), as I've also read Bohr, Bohm, Feynman and Weinberg.

      I've read The Blind Watchmaker and The Beak of the Finch. I've also read Darwin and Huxley.

      I've read modern histories of the Roman Empire. I've also read Gibbon.

      I've read C for Dummies. I've also read Kernighan & Ritchie.

      No, it wasn't always easy. I didn't expect it to be easy, or even desirable for it to be easy, because I expected to learn.

      Date is easier to read than Codd, but Codd is only hard until you understand the relational algebra. If you wish to be an expert in the field of databases understanding the relational algebra isn't really optional, no matter what your salary is.

      I'm learning to read classical Greek so that I may read Euripides. I've read most of Shakespeare and I'm working on the rest. I've never read a Harlequin romance novel. Elizabeth Peters mysteries are pretty nifty though, if you're willing to read some good works on Egyptology to get the most out of them.

      Your mileage may vary, but I'll take the harder road and be better informed for it. You may settle for being a kind of craftsman/tradesman; I'm trying for scientist/artist, and it puzzles me that most people in the computer field are functionally innumerate and desire that state of ignorance.

      Are we not geeks?

      No, I guess most of us are Devo.

      I think that's a bit sad.

      KFG

      • by seafortn ( 543689 ) <reidkr@nOSpAm.yahoo.com> on Sunday November 16, 2003 @10:39AM (#7486970)
        All I have to say, brother, is Amen! Anti-intellectualism is the "cool" thing in too many fields today, and I think it'll eventually lead to a stagnation of society, technology, and science - at least in America, where we'll be content to be a third-rate country so long as we can still buy McDonald's.
      • by handy_vandal ( 606174 ) on Sunday November 16, 2003 @11:25AM (#7487173) Homepage Journal
        I'm learning to read classical Greek so that I may read Euripides.

        An admirable exercise.

        Journalist I.F. Stone, rather late in life, taught himself ancient Greek, in order to read the actual source documents relating to the trial and execution of Socrates [umkc.edu].

        No translation would suffice: Stone felt that only by reading the original text for himself could he arrive at the insight he desired.

        -kgj
        • "No translation would suffice: Stone felt that only by reading the original text for himself could he arrive at the insight he desired.

          Precisely the point. The exercise is also teaching me a tremendous amount about written language in general and thus English, so the exercise is even currently relevant. My age isn't quite so advanced as Stone's, so I feel a bit free to take the slow road and examine the development of the Greek alphabet from the Phoenician along the way.

          I find this particular bit from th
          • All our basic problems are there in [classical Athens] miniature. -- I.F. Stone

            Exactly. The soul of man has not changed since the classical world.

            Good, evil, right and wrong, kindness and cruelty, peace and war -- details may change, interpretations may change, certainly the technologies change ... but in terms of our humanity, we are fundamentally the same as our ancestors.

            There is a terrible temptation -- especially in America, my home country, whose founders saw themselves as the spiritual succes
            • "What has been will be again, what has been done will be done again; there is nothing new under the sun. Is there anything of which one can say, "Look! This is something new"? It was here already, long ago; it was here before our time."

              Ecclesiastes 1:9
      • Sometimes what you say is true, that there are insights in the originals that have been lost. Other times they're just old.

        If the original is like this Codd you mention, where he makes a science out of something and other people distill it for popular reading, then yes, reading the original is likely to teach you something.

        But if the original is scientific, as are all of the books that build upon it, you're not likely to learn a lot more about the state of the art today. You'll learn what it was lik
      • by K-Man ( 4117 ) on Sunday November 16, 2003 @03:06PM (#7488370)

        Harlequin romance novels express the same ideas in much easier to read language.

        They also have romantic, swooning sex by page 70.

        Are you listening, Don Knuth?

    • by Frodo2002 ( 595920 ) on Sunday November 16, 2003 @11:03AM (#7487076) Homepage

      I guess I have to challenge this one too. Of course ideas are superseded or improved upon. Understanding is refined as the field matures... But here is an argument for why you should do exactly the opposite of what you suggest:

      The historical development of ideas, from their first suggestion to their eventual refinement, represents a natural progression in human understanding and cognition. When you try to short-cut that cognitive development you are invariably left with weak, poorly formed ideas. Great old papers should be read so that you can gain insight into this development of ideas and it may help you understand things much better than before.

      This claim is difficult to back up with any sort of scientific test. As some evidence, one field of education (physics education) specialises in short-cutting the historical development of ideas and as we in the field know, teaching physics is a spectacular failure (though some would deny it). As a personal piece of evidence (does not count for much, but I don't have any other evidence at hand), I can say I never really felt entirely comfortable with Schrodinger's equation and its probabilistic interpretation until I went back and read Schrodinger's and Born's original papers. That is when I realised that Schrodinger's wave equation describes a wave in configuration space. Also, his subsequent fights with Bohr, where he tried to defend a matter wave interpretation of the wave function, reveal much about the type of ontological misclassification which humans fall into. Now isn't that amazing? Schrodinger spent a lot of time trying to defend an ontological standpoint that the wave function represented a material wave even though he was the person who derived the wave equation and should have known better. Is it any wonder then that my students, who don't even really understand where the wave function and wave equation come from, think that the wave function represents a material wave? I would have had none of this insight without reading the original papers.

    • If nobody reads those "great old papers" any more, there's probably a reason.

      Yep, but the reason is different than most people think...

      The papers lie unread because most 'computer scientists' aren't scientists, but "engineers", or at best a weird hybrid of the two. (Frankly, I don't regard computer programmers as engineers but rather as artisans. The success of their efforts depends less on their tools and materials than on their personal abilities.)

      They aren't working on new discoveries, or delvi

  • by Multics ( 45254 ) on Sunday November 16, 2003 @08:59AM (#7486644) Journal
    I ponder if we made a list of oh say 'n' of these if the typical /.er would read them.

    I've taught computer science - specifically software engineering, where there is about a 1" thick stack of around 15 papers that get the whole idea across. Wonderful works like "Go To Statement Considered Harmful" (Communications of the ACM, 11, pp. 147-148, 1968) come to mind. But I don't think there's much hope that the typical /.er will take the time and effort to read them, let alone think about them.

    In the last couple of weeks, /. as a culture came up as a lunch conversation between my co-workers and me. We came to the conclusion that the wild herd doesn't pay for stuff (Kazaa, Morpheus, etc.), is ADD (how many times have you read a posting where the poster hadn't read the link?), and generally thinks that education is mostly worthless (the biannual do-I-need-a-degree grudge match). Given these behaviors, why go through the effort of making a list?

    If I were working this space (putting my teaching hat back on) I'd cover:

    Computer Architecture (where all things come from)

    Theory of Computing including O() [& friends], analysis of algs, Turing, etc.

    Software Engineering

    Software Testing

    Graphics

    Databases

    Numerical Methods

    Simulation (& Statistics)
    and

    Systems Analysis (where apparently all books currently suck)

    I think that would be the place to start and there would be more than 10 or 20 of them.

    -- Multics

    • Where they are very valuable is in establishing 'prior art'.
    • by orthogonal ( 588627 ) on Sunday November 16, 2003 @09:22AM (#7486702) Journal
      I ponder if we made a list of oh say 'n' of these if the typical /.er would read them....
      We came to the conclusion that the wild herd [on Slashdot]... generally thinks that education is mostly worthless....
      If I were working this space (putting my teaching hat back on) I'd cover:....


      So put your money (time is money) where your mouth is.

      Seriously. Email one of the Slashdot editors, get a section called "Slashdot Tells", and post your first lecture, along with assigned reading.

      Let the /. "wild herd" post questions and comments, and let them moderate up the ten or fifteen most important questions for your perusal.

      Come back the next week, post your answers and your next lecture, and let those who can demonstrate mastery of your earlier lecture and the assigned reading go through the cycle again.

      I'll take part in whatever you care to teach, and I'd wager you'd get a core group who would follow the lecture series through.

      Use a free e-text (such as the MIT open courseware), or some GFDL book, as your text.

      What's in it for you? Well, teaching is the best way to learn (or re-learn). Keeps the mind supple. Not to mention the satisfaction of passing on what you know.

      And telling your colleagues you've learned to herd cats.
      • You forgot to quote

        We came to the conclusion that the wild herd doesn't pay for stuff

        ...and then proceed to suggest that he offer what is basically a college CS course for free.

        I'm not saying it's a bad idea (though that is a LOT of work on his part), but it sure is validating his observation about paying for stuff....

        • by orthogonal ( 588627 ) on Sunday November 16, 2003 @10:53AM (#7487038) Journal
          You forgot to quote

          We came to the conclusion that the wild herd doesn't pay for stuff ...and then proceed to suggest that he offer what is basically a college CS course for free.


          Good point.

          I suppose I could counter with "Doesn't he use any open source software? Think of the course as giving back for the kernel" or something, but that would be disingenuous.

          But I do have an idea about payment.

          Unfortunately, my idea won't put any money in his pocket. (No, it doesn't involve collecting underpants, either.)

          It's more a pay it forward type idea: train people, and then send them forth to train others.

          It's a good model for accelerating a meme, but perhaps too evangelistic for the Intellectual Property world.

          I'd like to set up some form of co-operative education, where small and easily learned skills (not as complex as what our OP proposes to teach) are taught to small groups, with each learner undertaking to teach another small class to pay his "tuition".

          I think there are some advantages to this model, not the least being that the best way to learn something -- to really learn it and make it part of yourself -- is to teach it.

          There are some problems with it too, but I've got a sketch of some ideas to overcome the obvious ones.

          Of course, it's not a new idea: it's how cults have always spread. The question is, can it propel mundane learning as well as it propels sacred ideas?

          But it's not really a payment scheme. It won't pay the grocery bill.

          Maybe we should sell tickets?
    • Is your analysis of /. as a generalized mass of mentally-deficient thieves, provided without any actual information, indicative of the general quality of your teaching?

      If nothing else, maybe it clarifies the source of underlying conflict.

      (YHBT?)
    • You, sir, are ignorant to assume that degree == (education || learning). A degree is a piece of paper from an institution that indicates that you are not a complete buffoon, and that you are employable. Nothing more. I've considered the validity of my desire to finish school every semester - and before that, every year of high school. However, do I hold education in low regard? No. Quite the contrary - to the extent that I have been continually lauded by teachers, peers, professors, and others for my knowledg
      • You Sir represent *exactly* the attitude the original poster was decrying.

        My gripe with the education system - and particularly higher institutions of learning, which should know better - is that they dumb the stuff down for the least common denominator,

        Which means you are in the wrong institution, not that the institution is at fault. But then, getting into an institution where the curriculum is hard is in itself difficult.

        can't think of an interesting way to teach it for the life of them

        Last I che

      • I know this is news to most people, but computer science is not about programming. If you want to learn how to program, how to develop large systems and databases ... study the commerce subject "Information Systems".

        Computer science is about the science of computing. It is about understanding and advancing the state of computing through the advancement of computing theory.

        The fact that practical application or empirical testing of the techniques requires computer science students to have rudimentary p

  • if it's not on Google, it does not exist.
    uh... right?
  • Does anyone know of a website where you can get access to comp sci and comp eng papers and stuff? I'm speaking as a normal person, as opposed to a student (i.e. something free, that doesn't require university resources, is easy to access, etc.). Searching on Google is, well, not my idea. I'm wondering if there is a central repository or something that tracks things. For example, let's say I want to read up on AI - where do I go? There are places like this for other stuff (e.g. physics, astronomy, medicine, etc) but have
  • by acidblood ( 247709 ) <decio@@@decpp...net> on Sunday November 16, 2003 @09:02AM (#7486650) Homepage
    McCarthy's paper on Lisp: Recursive Functions of Symbolic Expressions and Their Computation by Machine (Part I) [stanford.edu].

    For a refreshing analysis of the paper by Lisp guru Paul Graham (the same guy who proposed the idea of Bayesian anti-spam filtering), see The Roots of Lisp [paulgraham.com].
  • by ljavelin ( 41345 ) on Sunday November 16, 2003 @09:07AM (#7486658)
    You betcha. There has been a lot of research over the past 50 years, and much of it ignored - especially research that isn't in English.

    A lot of old research is interesting in terms of patent law. A lot of this research can be used to invalidate patent claims - prior art. An idea published 30 years ago simply cannot be legitimately patented now.

    Very recently my Dad told me about a new patent assigned to one of his competitors. But my Dad claimed that his colleague didn't patent that very idea in the 1970s because my dad knew of prior art - my dad had heard a researcher from Germany talk about the same thing at a small conference.

    Given the prior art, my Dad and his colleague didn't apply for a patent back then. But 35 years later, a company patented the idea. My Dad was pretty pissed!

    So Dad and I slogged through tons of (paper) documents at the LoC and other resources, trying to help him remember who the speaker was and where the conference was held. After a few weeks of digging, we got a copy of the (hard to locate) conference proceedings, and now that brand new patent looks like it's toast.

    Now here's the rub - the only reason this patent was invalidated is that my dad is still in the industry, and he's well over retirement age. Everyone else my Dad works with thought the patent would toast them. Only my dad, an old researcher with a good memory, could help his company overcome the (invalid) patent. What if my dad were retired? What if he hadn't attended that talk in the 1970s? Most people simply wouldn't have known where to look for the prior art. [And not every call for prior art is suitable for Slashdot.]

    Old research and old researchers are good - not only for disposing of "new" patents, but for the value of the efforts and lessons learned. So much is forgotten.

  • by acidblood ( 247709 ) <decio@@@decpp...net> on Sunday November 16, 2003 @09:09AM (#7486664) Homepage
    ...but Claude E. Shannon's paper, A Mathematical Theory of Communication [bell-labs.com] has changed our outlook on information and communication. The importance of this paper on modern communication cannot be stressed enough, and it is very readable. If I had 10 papers to take to a desert island, surely this one would be on my list (:
  • by acidblood ( 247709 ) <decio@@@decpp...net> on Sunday November 16, 2003 @09:13AM (#7486675) Homepage
    This was reported on Slashdot a while ago, but it deserves another mention: the manuscripts of Edsger W. Dijkstra [utexas.edu]. There are more than a thousand documents written by Dijkstra in this archive, and very interesting ones too -- careful or you'll lose days browsing it like I did.
  • Quantum Computation (Score:3, Informative)

    by acidblood ( 247709 ) <decio@@@decpp...net> on Sunday November 16, 2003 @09:16AM (#7486683) Homepage
    While not exactly classic papers, some of these may be regarded as classic by our grandchildren when the time comes, since they're at the forefront of computer science's research today. A good introduction to quantum computing was recently linked in a Slashdot story posting: The Centre for Quantum Computation's Tutorials [qubit.org]. Very, very interesting reading, if a bit advanced.
  • by A. Brate ( 588407 ) on Sunday November 16, 2003 @09:17AM (#7486685) Homepage Journal
    This is shameless self-promotion, but you should read my book [amazon.com]!

    Technomanifestos discusses the truly thought-provoking, inspirational, seminal computer papers of the 20th century [technomanifestos.net], from Turing's "On Computable Numbers" and "Computing Machinery and Intelligence", to Alan Kay's "Personal Dynamic Media" to Larry Wall's States of the Perl Onion.

    The book delves into the historical, biographical, and scientific context of works such as these and follows the thread of inspiration to today's world. If you want to know where the Internet germinated, or how Marshall McLuhan and Pierre Teilhard de Chardin influenced the World Wide Web (or even who McLuhan and Teilhard are!) you should pick up my book. And then read it.

    Technomanifestos tracks the evolution of the MIT hacker, from the dapper Boston Brahmin Vannevar Bush to the famously unkempt Richard Stallman, and introduces the cast of lesser-known (to the non-Slashdot world) but crucially inventive individuals such as Ivan Sutherland and Seymour Papert.

    Moreover, it discusses how the truly great computing ideas come from people who recognize that technology, especially information technology, has the power to transform people and society--these are (in the words of similarly great books) tools for thought [amazon.com] and dream machines [amazon.com].

    Or if you have no interest in helping me pay my DSL bill, you can go straight to the sources [technomanifestos.net], many of which are available online.

  • Citeseer (Score:3, Informative)

    by p-p-pom ( 716823 ) on Sunday November 16, 2003 @09:18AM (#7486688)
    Citeseer was cited in the blurb, but a really nice service that they provide is the Computer Science Directory [nec.com]. There you can look for papers sorted by domain, and ranked by several criteria like "authority". The top papers are usually a good read if you are interested in a particular domain.
  • by acidblood ( 247709 ) <decio@@@decpp...net> on Sunday November 16, 2003 @09:28AM (#7486720) Homepage
    Though it has very few entries, and is no longer updated, there are at least two papers in that list that the typical Slashdotter may have heard about: Go To Statement Considered Harmful [acm.org], by Dijkstra, and Reflections on Trusting Trust [acm.org], by Ken Thompson.

    The remaining ACM Classics of the Month are here [acm.org].
  • by linuxislandsucks ( 461335 ) on Sunday November 16, 2003 @09:30AM (#7486726) Homepage Journal
    I think Alan Kay would agree that not all CS papers worth a read are in CS..

    Information theory was certainly not in CS when it was originally written in the 1940s.. instead it was in telecommunications and mathematics :)

    Basically, the areas you should be looking at are:

    Logic
    Philosophy
    Mathematics
    Physics
    Biology
    Chemistry

    For example, the concept of metadata.. i.e. data that has different meanings based on context.. is common in all these areas and has direct applications to CS! Some of the current concepts of the Semantic Web are from this area and started in language studies.. :)

    Remember, the crusty old CS professor's adage that CS is multidisciplinary still applies! :)

    Fred Grott
    ShareMeTechnologies-The Mobile Future
    http://www.jroller.com/page/shareme/Weblog
    #11: Thou shalt not covet thy neighbor's tagline.
  • I'm not sure if academics consider it a classic, but many will be interested in reading about the beginnings of UNIX: The UNIX Time-Sharing System [regehr.org], by Dennis Ritchie and Ken Thompson.
  • Since it hasn't been mentioned before, Clifford Stoll's paper "Stalking the Wily Hacker" (CACM 1988:31:484-497) is a classic that should be included in any list of influential papers.

    That being said, here's a question: has anyone published an anthology of classic CS papers? I'd love to have in one volume examples of the classic work by Von Neumann, Turing, Ritchie, and the rest of the gang. Has such an anthology been published? If so, I'd buy one in a heartbeat.

  • by brentlaminack ( 513462 ) on Sunday November 16, 2003 @09:39AM (#7486754) Homepage Journal
    Reference: Jerome H. Saltzer, and Michael D. Schroeder. The Protection of Information in Computer Systems. (invited tutorial paper) Proceedings of the IEEE 63, 9 (September 1975) pages 1278-1308. Reprinted in David D. Clark and David D. Redell, editors. Protection of Information in Computer Systems. IEEE 1975 CompCon tutorial. IEEE # 75CH1050-4. Also reprinted in Rein Turn, editor. Advances in Computer System Security. ArTech House, Dedham, MA, 1981, pages 105-135. ISBN 0-89006-096-7 Also reprinted in Marvin S. Levin, Steven B. Lipner, and Paul A. Karger. Protecting Data & Information: A Workshop in Computer & Data Security. Digital Equipment Corporation, 1982. This paper was originally prepared off-line. In 1997, Norman Hardy kindly rendered it into World-Wide Web form. here [mit.edu]
  • Here are three. Not the top three, not the only three, but definitely an important three. Maybe someone else will have better luck tracking down a link to Mogul's paper.
  • Donald E. Knuth (Score:5, Interesting)

    by roffe ( 26714 ) <roffe@extern.uio.no> on Sunday November 16, 2003 @09:44AM (#7486771) Homepage

    Donald Knuth has written a lot of interesting papers, but his paper on TeX's line-breaking algorithm

    • Defines the state of the art in digital typesetting
    • Is a textbook example of how a scientific paper should be written: it outlines the history of the problem, gives historical and current examples, defines the problem statement and discusses the suggested solution.

    and as far as I know, the algorithm is still state of the art and is used only by TeX, InDesign and an add-on for QuarkXPress.
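
    The heart of that paper is that line breaks are chosen globally, by minimizing a total cost over the whole paragraph with dynamic programming, rather than greedily line by line. A toy sketch of the idea in Python follows, using squared leftover space as the cost; the real algorithm's boxes, glue, penalties and demerits are all left out.

    ```python
    def break_lines(words, width):
        """Toy Knuth/Plass-style paragraph breaker: pick breakpoints that
        minimize the sum over lines of (unused space)^2 instead of filling
        each line greedily.  Only the dynamic-programming core is kept."""
        n = len(words)
        INF = float("inf")

        def badness(i, j):                         # cost of words[i:j] on one line
            length = sum(len(w) for w in words[i:j]) + (j - i - 1)
            return INF if length > width else (width - length) ** 2

        best = [0.0] + [INF] * n                   # best[j]: cheapest layout of first j words
        choice = [0] * (n + 1)
        for j in range(1, n + 1):
            for i in range(j):
                cost = best[i] + badness(i, j)
                if cost < best[j]:
                    best[j], choice[j] = cost, i

        lines, j = [], n                           # walk the choices back to recover lines
        while j > 0:
            i = choice[j]
            lines.append(" ".join(words[i:j]))
            j = i
        return list(reversed(lines))

    print(break_lines("the quick brown fox jumps over the lazy dog".split(), 15))
    ```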

  • Some suggestions (Score:3, Informative)

    by offpath3 ( 604739 ) <offpath4@ya h o o . c o .jp> on Sunday November 16, 2003 @09:46AM (#7486775)
    If you want a mind bender, there is always On the Duality of Operating System Structures [stanford.edu]. But if you want something a little more practical, I'd recommend Eliminating Receive Livelock in an Interrupt-Driven Kernel [stanford.edu] or The End to End Argument in System Design [stanford.edu].
  • Many "classic" papers are reissued in "Readings In..." volumes--check on Amazon for your favorite subject area. Also, Citeseer ranks papers by popularity; that's not necessarily an indication of either quality or significance, but it is another measure of interest. Then, ask your colleagues, friends, professors, fellow students for recommendations.

    You can also do some digital archaeology: a lot of decades-old ideas are embodied in software you can download. You can get copies of MIT's ITS, TENEX, Smallta
  • by skaya ( 71294 ) on Sunday November 16, 2003 @09:52AM (#7486789) Homepage
    As a PhD student, I often have to look for papers in the computer science field; and very often, CiteSeer yields better results - or, rather, different results, but with a very good cross-referencing system. You can directly jump to the other papers cited by the paper you're reading, and you can see which papers cited it, too.

    The URL: http://citeseer.nj.nec.com/cs [nec.com]

    That said, I often find very interesting ideas in scientific papers, but sometimes things can't be implemented with current technology (I'm still talking about the computer science domain, since that's what I know), or sometimes the good idea in the paper is obsoleted a few years later.

    For instance, I remember a scheduling algorithm for reading disk blocks in a Video-On-Demand server: it was maybe very clever when it was written, when they had to feed 155 Mbps with a computer having 16 MB of RAM, but today you have maybe 10 times more throughput and 100 times more RAM - so you can use simpler, memory-hungry buffering methods.

    The problem is that it's difficult (IMHO) to say "OK, this paper is theoretically interesting, but we can't implement this today, BUT we will probably be able to do it in a few (dozen) years", because you don't know what will and won't evolve (in my previous example, it was easy to predict that network bandwidth and memory size would increase, but it was maybe harder to guess that MPEG4 and DivX would allow the bitrate of a video stream to stay low...)
  • It looks like an AT&T researcher "invented" sublists as a way to defeat duplicate detection filters as part of a research project. The patent reads like a research paper, with various theorems and corollaries to prove how various methods of filtering spam by duplicate detection are ineffective and that spammers have the upper hand against those methods.
  • by hubertf ( 124995 ) on Sunday November 16, 2003 @10:07AM (#7486844) Homepage Journal
    Check out the Networked Computer Science Technical Reference Library [ncstrl.org].

    - Hubert
  • by Henry Stern ( 30869 ) <henry@stern.ca> on Sunday November 16, 2003 @10:37AM (#7486962) Homepage
    Since nobody who seems to have actually read any computer science papers has posted, here are two that immediately come to my mind.

    Vannevar Bush. As We May Think [vub.ac.be]. Atlantic Monthly, July, 1945.

    This paper put forth the very first ideas about how people can mechanically search for information. While we don't have desks with levers on them, we do have Google. :)

    Tim Berners-Lee. Information Management: A Proposal [w3.org]. 1989.

    This paper is where Tim Berners-Lee proposes what we now know as the World Wide Web. It's an interesting read if you'd like to see what the original intent of the web was, so that you can compare it to what we have today.

    A place to look for good old computer science papers is in older issues of Communications of the ACM. There are lots of articles in plain English that you may find of interest. If you are a university student, your school may have a subscription to the ACM Digital Library. If they do, you can read all the issues back to 1958.

    Also, you can find a lot of interesting CS publications at Citeseer [nec.com]. They have a page with the top 200 most accessed papers of all time [nec.com]. When I skimmed through it, I saw quite a few titles that may be of interest.
  • There are a number of sources for such things. If I were looking for a list of great papers, I would look in Citation Classics - papers selected based on their frequency of citation by other papers. I would also look in the bibliographies of textbooks and dissertations. Articles in review journals are also expected to have very strong bibliographies.

  • NTRS [nasa.gov] - Enjoy!
  • Isn't it possible to search CiteSeer for the most cited papers? Heck, couldn't someone do a PageRank-esque regression over the references to find the authoritative papers in CS? I know there's a search by number of citations out there.
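
    For what it's worth, the computation being described is essentially PageRank run over the citation graph, which is just power iteration. A rough Python sketch with a made-up toy graph (plain PageRank, not whatever ranking CiteSeer actually uses):

    ```python
    def pagerank(cites, damping=0.85, iters=50):
        """Power-iteration PageRank over a citation graph given as
        {paper: [papers it cites, ...]}."""
        papers = set(cites) | {q for refs in cites.values() for q in refs}
        rank = {p: 1.0 / len(papers) for p in papers}
        for _ in range(iters):
            new = {p: (1 - damping) / len(papers) for p in papers}
            for p in papers:
                refs = cites.get(p, [])
                if refs:                           # give each cited paper a share
                    for q in refs:
                        new[q] += damping * rank[p] / len(refs)
                else:                              # dangling paper: spread evenly
                    for q in papers:
                        new[q] += damping * rank[p] / len(papers)
            rank = new
        return rank

    # Made-up example: three later papers all cite "turing1936".
    toy = {"a": ["turing1936"], "b": ["turing1936", "a"], "c": ["turing1936"]}
    ranks = pagerank(toy)
    print(max(ranks, key=ranks.get))               # turing1936
    ```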
  • Dave Parnas is rarely mentioned as one of the "great computer scientists", but his ideas have been very influential in software engineering. He wrote on information hiding and separation of concerns well before object-oriented programming existed. His discussion of "undesired events" was a forerunner of exception handling mechanisms. He wrote of families of software products decades ago, an idea that is only now being actively pursued under the term "product lines".

    A number of his papers have been collected into a bo
  • by e_lehman ( 143896 ) on Sunday November 16, 2003 @11:43AM (#7487263)
    This [nec.com] is the paper by Diffie and Hellman that originated public-key cryptography. This paper explained for the first time (in an unclassified place) how two parties could communicate privately over an open channel without previously agreeing on a secret key. Every time your browser says, "Setting up a secure connection..." when you order from Amazon or check your bank account, you're witnessing the impact of this work.
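
    The exchange that paper made public fits in a few lines: each side publishes its generator raised to a private exponent, and both arrive at the same shared secret, while an eavesdropper sees only p, g and the two public values. A toy Python sketch follows; the prime here is far too small to be secure, and real systems use standardized groups of 2048 bits or more.

    ```python
    import secrets

    p = 4294967291                     # largest prime below 2**32 -- demo only
    g = 5

    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

    A = pow(g, a, p)                   # Alice sends A over the open channel
    B = pow(g, b, p)                   # Bob sends B over the open channel

    # Each side combines its own secret with the other's public value.
    alice_key = pow(B, a, p)           # g**(b*a) mod p
    bob_key = pow(A, b, p)             # g**(a*b) mod p
    assert alice_key == bob_key        # the shared secret nobody transmitted
    ```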
  • by xeo_at_thermopylae ( 585330 ) on Sunday November 16, 2003 @01:09PM (#7487740)
    Ivan Sutherland's Sketchpad was the first realization of object-oriented programming. As you read it you see OOP come to consciousness. Sutherland's dissertation is available online at Sketchpad, A Man-machine Graphical Communication System[HTML] [216.239.41.104] or Sketchpad, A Man-machine Graphical Communication System[PDF] [cam.ac.uk]. It was originally submitted at M.I.T. in 1963.

    In the section titled GENERIC STRUCTURE, HIERARCHIES, Sutherland describes how he restructured SKETCHPAD in what we would immediately recognize as an OO manner:

    "The big power of the clear-cut separation of the general and the specific is that it is easy to change the details of specific parts of the program to get quite different results or to expand the system without any need to change the general parts. This was most dramatically brought out when generality was finally achieved in the constraint display and satisfaction routines and new types of constraints were constructed literally at fifteen minute intervals." ... "Before the generic structure was clarified, it was almost impossible to add the instructions required to handle a new type of element."

    Later in the section DEMONSTRATIVE LANGUAGE we see what we might call today the association of classes with methods as Sutherland notes:

    "The organization of the demonstrative program in Sketchpad is in the form of a set of special cases at present. That is, the program itself tests to see whether it is dealing with a line or circle or point or instance and uses different special subroutines accordingly. This organization remains for historical reasons but is not to be considered ideal at all. ***A far better arrangement is to have within the generic block for a type of picture part all subroutines necessary for it.***" [asterisks mine].
  • by JAS0NH0NG ( 87634 ) on Sunday November 16, 2003 @01:12PM (#7487768)
    A lot of people have covered a lot of great areas in computer science. Here's a short annotated list I've put together for an often-overlooked area, human-computer interaction.
    • As We May Think [theatlantic.com], by Vannevar Bush. Bush was the Director of the Office of Scientific Research and Development, basically the precursor to NSF and DARPA. In this magazine article, he observed the problem of disseminating information, and noted that electronics may be a better medium (keep in mind that this was written in 1945). He also outlines what he calls the Memex, the first description of a hypertext machine. Bush's theme is that we need to create devices that will make it easier for us to store and access information, and ultimately solve problems better.
    • Sketchpad, by Ivan Sutherland. Couldn't find a link to a video, but this truly is one of the seminal papers in computer science. This paper introduced the first graphical user interface (graphical as in graphics, not windows and mouse), the first object-oriented system, the first zooming interface, and the first constraint solver. Best quote:
      "I once asked Ivan, 'How is it possible for you to have invented computer graphics, done the first object oriented software system and the first real time constraint solver all by yourself in one year?" And he said "I didn't know it was hard." -- Alan Kay on Ivan Sutherland.
      The embarrassing part is that, although this was done in the early 1960s, Sketchpad still looks cool and useful today.
    • Doug Engelbart's 1968 Demo [stanford.edu]. The link points to a video collection, which is easier to digest than his papers. Engelbart is not the most exciting speaker, but keep in mind that in 1968 people were still stuck using terminals and punchcards. What does he show them? The first mouse. The first hypertext implementation. The first use of video-conferencing. The first online help system. The first interactive word processor. Obviously a mind-blowing experience if you were there. As many people have said, this is the mother of all demos, and we still have not achieved many of his visions today.
    • The Computer for the Twenty-First Century [ubiq.com] by Mark Weiser. Although this was written in 1991, I think that this might be the most important paper of the 1990s. Why? Keep in mind that in 1991, people were still using desktop PCs, that wireless had not achieved momentum, and that sensors were very few and far between.

      So what is the basic idea? That computers should not be constrained to the physical desktop, but should become an everyday and seamless part of our lives. And in this paper, Weiser and his team at Xerox PARC introduced location-based computing; devices of all form factors, from small PDAs to tablet PCs to electronic whiteboards; sensors for integrating the physical and virtual worlds; and wireless networking to make it all connected no matter where you were (in their office building, anyway). Weiser's vision is so influential that there are now (literally) thousands of researchers working on what he called ubiquitous computing, as well as several research conferences devoted to this theme, not to mention the direction the commercial world has already taken with PDAs, WiFi, sensor networks, and so on.

  • Dijkstra (Score:3, Informative)

    by Ian Bicking ( 980 ) <(moc.ydutsroloc) (ta) (bnai)> on Sunday November 16, 2003 @02:47PM (#7488276) Homepage
    I found this paper (note?) by Dijkstra quite interesting: The Programming Task Considered as an Intellectual Challenge [utexas.edu]

    It talks about software quality and testing -- which seems very applicable to, if not entirely in sync with, recent ideas about agile programming, test-driven development, etc.

  • by Tony.Tang ( 164961 ) <slashdot@@@sleek...hn...org> on Sunday November 16, 2003 @03:39PM (#7488572) Homepage Journal
    This is not a paper, but a video that was made in the late '60s [stanford.edu]. In it, you'll see many UI concepts that you see being "discovered" now.

    For instance, he has the very first mouse, a word processor with cut, copy, paste, embedded graphics (remember how cool OLE seemed to be?), hyper-linking (remember how cool hypercard seemed to be?), embedded levels of text (kind of like looking at a hyper-linked table of contents in a book), multi-handed interface, a piece of groupware that allows him and a distant co-worker to work together in the same application (think collaborative real-time modification of the same document -- something we still don't really have), telepointers (graphical representation of other people's mouse pointers), embedded video (think webcam), and the list goes on and on and on.

    When you think about the fact that this was done in the 1960's, you really begin to wonder, "what the hell have we been doing since then!?"

  • by DrSkwid ( 118965 ) on Sunday November 16, 2003 @03:41PM (#7488584) Journal
    http://www.cs.bell-labs.com/who/rob/ [bell-labs.com]

    be sure to catch "Systems Software Research is Irrelevant"

    You will probably see a lot worse links than :

    Bell Labs [bell-labs.com] - formerly known as heaven.

  • by xenocide2 ( 231786 ) on Sunday November 16, 2003 @04:33PM (#7488919) Homepage
    I know a lot of people are going to suggest the Turing papers, and other more impact-of-computation-on-society type papers. Of course, they might be better off mentioning Seymour Papert [papert.org], but I'd rather focus on some papers a little more concrete.

    One of the problems with looking for original papers on CS is that the earliest were intensely focused on mathematical notation -- from the 1930s! For example, the famous mathematician Church is credited with the definition of the lambda calculus for denoting functions, which classes about programming languages use heavily. During such a class, our professor introduced us to a few papers, including "Definitional Interpreters for Higher-Order Programming Languages" by John Reynolds. The paper was originally published in 1972, so I'm not sure how he got ahold of it. But it's a great survey of the topic. If you're really interested in a specific topic, the easiest way, I find, to find foundational papers is to find a textbook on the topic with a thick bibliography. Then just try to trace out the citation genealogy to an appropriate root. Eventually you'll work your way from something like "Designing autonomous robots to work independently in cellular networks" to something like "cooperative robotics." In this quest, CiteSeer can be a great tool. But it makes a poor starting place, as you mentioned.
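
    To make the Reynolds reference a little more concrete: a definitional interpreter is just an interpreter for a small language, written in another language, used to pin the first language's meaning down. Here is a minimal environment-passing interpreter for the lambda calculus in Python; it is only a flavor of the genre, nothing like Reynolds's actual development (which is about how features of the defining language leak into the definition).

    ```python
    # Terms: ("var", name) | ("lam", name, body) | ("app", function, argument)
    def interp(term, env):
        """Tiny environment-passing interpreter for the lambda calculus."""
        kind = term[0]
        if kind == "var":
            return env[term[1]]
        if kind == "lam":                          # a closure: body plus captured env
            _, name, body = term
            return lambda arg: interp(body, {**env, name: arg})
        if kind == "app":
            _, fn, arg = term
            return interp(fn, env)(interp(arg, env))
        raise ValueError(term)

    # ((lambda x. lambda y. x) a) b  evaluates to whatever a is bound to.
    k = ("lam", "x", ("lam", "y", ("var", "x")))
    prog = ("app", ("app", k, ("var", "a")), ("var", "b"))
    print(interp(prog, {"a": "first", "b": "second"}))     # prints: first
    ```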
  • by Pampaluz ( 163324 ) <(pampaluz) (at) (cox.net)> on Monday November 17, 2003 @12:53AM (#7491269) Journal
    Here is a neat site that I found (yep, using Google):

    Great Papers in Computer Science:
    http://bit.csc.lsu.edu/~chen/GreatPapers.html [lsu.edu]

    I kept trying to put the TOC from the site in this comment, but Slashdot kept saying that the line length was too short. Since it was just plain text, I do not understand what was going on with that. So sorry, but the link really is worth checking out. Good reading!
  • Good CS Reading List (Score:3, Informative)

    by eludom ( 83727 ) on Monday November 17, 2003 @07:17AM (#7492146) Homepage
    http://john.regehr.org/reading_list/
