
What's Wrong with Unix?

Cliff posted more than 9 years ago | from the defects-and-potential-solutions dept.


aaron240 asks: "When Google published the GLAT (Google Labs Aptitude Test) the Unix question was intriguing. They asked an open-ended question about what is wrong with Unix and how you might fix it. Rob Pike touched on the question in his Slashdot interview from October. What insightful answers did the rest of Slashdot give when they applied to work at Google? To repeat the actual question, 'What's broken with Unix? How would you fix it?'"



Several frustrating points (5, Insightful)

SIGALRM (784769) | more than 9 years ago | (#11203936)

What's wrong with UNIX? Depends on which perspective you start from...

In my opinion, here are some headaches that have plagued a wary UNIX engineer or two:

* IEEE and POSIX, X/Open, etc. provide a basis for standardizing UNIX interfaces, but adherence tends to be spotty

* Difficult to implement a microkernel architecture

* XPG3 aside, a de facto "common API" has never really been achieved

* In many cases, code scrutiny is difficult or impossible

* Progress and innovation tend to occur within the context of acquisitions (i.e. UnixWare)

* The COFF symbolic system is terrible (OK, I know it's deprecated, but still...)

* PIT initialization (time management)

* Kernel tuning (anyone fiddled with the /etc/conf/cf.d subdir on OS5?)

These are just a few things, in my experience. That said, UNIX has had some great days.

Re:Several frustrating points (3, Interesting)

XaviorPenguin (789745) | more than 9 years ago | (#11203965)

Maybe I am just stupid but it is kind of hard to install. I am a Unix newbie. I have downloaded FreeBSD and almost had it installed but I have failed many times, therefore going back to Linux or even Windows. If they can make the installers just a bit more user friendly like this [bsdinstaller.org], then I am all for it.

KSpaceDuel (5, Funny)

jkauzlar (596349) | more than 9 years ago | (#11204016)

Certainly this component of Linux needs to be rewritten. Firstly, it is far too difficult to maneuver your ship with the gravity the way it is, and secondly, the bullets go too slowly. Thirdly, it isn't intuitive what the different colored blobs are; it's easy to forget what is energy and what is a mine, or something like that.

I would suggest to the KSpaceDuel team that they meet with the KAsteroids team to discuss usability issues. There should also be a cap on how fast you can go, since it is possible to speed up so fast that your spacecraft appears to be moving very slowly (sort of like a tire in motion).

Re:Several frustrating points (5, Insightful)

Anonymous Coward | more than 9 years ago | (#11204050)

In addition:

  1. Crappy filesystem. Reiser4 or XFS is what UNIX should have started with, and even now we don't have file versioning.
  2. POSIX permissions suck. The suid bit sucks even more. ACLs make more sense, and UNIX should have had them from the start. If we're doing it now, capabilities would be even better.
  3. IPC primitives are poor. SysV shared memory goes some way to helping, and UNIX domain sockets are OK, but a proper message/event marshalling system would eclipse them all.
  4. The filesystem hierarchy is an awful mess. Non-standard across all unices and poorly evolved to cope with modern systems. /etc was a horrible cop-out and it shows. UNIX needs proper application packaging with proper self-contained application packages.
  5. Providing lots of little applications to do specific tasks was the best idea ever, but not providing a decent scripting language to bind them together was a bone-headed mistake. Likewise, not standardising some basic data-interchange formats (even if it was just pre-formatted ASCII) makes piping all those little tools together to do anything useful a pain.
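On point 5, the flip side is that pre-formatted ASCII already serves as the de facto interchange format. A sketch of the idea, with hypothetical data:

```shell
# Find the name with the highest score by chaining small tools over
# whitespace-separated columns -- the informal "standard" the parent wants formalized.
printf 'alice 42\nbob 17\ncarol 99\n' | sort -k2 -rn | head -n1 | awk '{print $1}'
# prints: carol
```

The complaint still stands, though: every tool invents its own column layout, so every pipeline ends up hard-coding field positions like -k2 and $1.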

Re:Several frustrating points (2, Informative)

Anonymous Coward | more than 9 years ago | (#11204121)

Let's make UNIX not suck [ximian.com] by Miguel de Icaza. Answers this exact question in quite a lot of detail! Try out the UNIX-HATERS Handbook [mit.edu] [warning: big PDF] for a more humorous take on things!

Re:Several frustrating points (1, Funny)

Anonymous Coward | more than 9 years ago | (#11204117)

My good man, all these and more are already fixed in The Hurd!!! Free4Life, baby, yeah!

-- RMS

FP! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11203937)


OS X (5, Insightful)

BWJones (18351) | more than 9 years ago | (#11203943)

Based upon my experience with IRIX and Solaris (with some Linux), I would have to say that most of the things that *NIX did poorly have been rectified with OS X. I would have said OS X was still lacking true 64-bitness, but that is coming in 10.4 rather quickly now. The number of Macs involved in secure and classified work in the Federal government has been exploding, and high-bandwidth networking options for cluster computing have also been resolved with options such as InfiniBand.

Development issues have been streamlined with rather nice tools Apple obtained via NeXT. Open standards are being embraced just about everywhere you turn in OS X; a true plug-and-play environment now exists (I am reminded of the last video card install on my SGI O2, which had me down for two days solid); the GUI is consistent; and the CLI is present and fully integrated with the GUI as well. Additionally, more and more networking options are being supported natively within OS X, which is one of the last hurdles to true cross-platform interconnectivity. And the G5! Oh, the G5 is a wonderful bit of hardware on which to run *NIX.

Problems that remain are being able to create one seamless environment with shared memory and such, but the rest of the *NIX world is still having those problems as well.

You can argue about the specifics and details of many things, but in terms of a UNIX workstation, OS X pretty much has it all for our needs.

But open-ness is now also a requirement (1)

Ars-Fartsica (166957) | more than 9 years ago | (#11204015)

Functionality is not the only criterion for evaluating an OS. Access to source code is also an important criterion, and one that OS X mostly fails (yes, I know about Darwin; it is only a piece of OS X).

If openness were not a requirement, we would not have the OS landscape we have today; we could have enabled rock-solid computing with any number of non-free alternatives like VMS, AIX, etc.

Re:OS X (0, Flamebait)

stupidfoo (836212) | more than 9 years ago | (#11204119)

I would have to say that most of the things that *NIX did poorly have been rectified with OS X

I would imagine that the question pertains more to use as a server OS than an easy-to-use, pretty-widgets OS.

Re:OS X (0)

Anonymous Coward | more than 9 years ago | (#11204146)

And what do you believe the parent is talking about? Most of the things the parent talks about have everything to do with a server OS.

needs some VMS stuff (5, Interesting)

nocomment (239368) | more than 9 years ago | (#11203947)

I like Unix, but I think I'd add some VMS stuff, like a Delete attribute. In VMS you can set people to have read/write/execute and delete. In Unix, if people have write, they can write it to "null" *grumble*.

Re:needs some VMS stuff (1)

pclminion (145572) | more than 9 years ago | (#11204011)

So your argument is that if people are allowed to overwrite stuff, then they can overwrite stuff? I don't get it.

Re:needs some VMS stuff (1)

nocomment (239368) | more than 9 years ago | (#11204074)

No, not at all. For example, a place where I would love this is on my netatalk servers. Users connect from their Macs all the time and occasionally will accidentally delete an entire directory. On VMS you can specify that people can modify, move, read, and execute, but cannot delete. So on my 4-node rsync'd netatalk cluster (the stuff on it is _REALLY_ important), when someone deletes a folder they would just get a message that says they can't do that. I think it's POSIX that specifies the permission attributes though, so I don't think it's UNIX's fault per se, but it would still be nice.

Re:needs some VMS stuff (4, Interesting)

pclminion (145572) | more than 9 years ago | (#11204123)

My point was, why protect against somebody "deleting" a file when they can just overwrite it with zeros? It's the same thing, right?

If you really want the kind of behavior you are talking about (although I can't imagine why), you can do it by making a hard link to the file in question in a directory which is "safe" from the user you are protecting against. They are still able to move the file around, modify it, etc. But if they delete it, the second hard link still remains, so the file is not actually deleted.
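The hard-link trick can be sketched from the shell (paths under /tmp are purely for illustration; in practice the "safe" directory would be unwritable by the user):

```shell
# Keep a second hard link in a directory the user can't touch; their
# "delete" removes only one name, not the underlying file.
mkdir -p /tmp/shared /tmp/safe
echo "important data" > /tmp/shared/report.txt
ln /tmp/shared/report.txt /tmp/safe/report.txt   # second link to the same inode
rm /tmp/shared/report.txt                        # the user deletes their copy
cat /tmp/safe/report.txt                         # prints: important data
```

Note this protects the data from unlink(2) but not from being overwritten, which is exactly the objection raised above.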

Re:needs some VMS stuff (1)

Krunch (704330) | more than 9 years ago | (#11204079)

I don't know about other unices, but Linux's ext2/3 lets you set attributes on files so that you can only append to them (see chattr(1)). However, it can't be set on a per-user basis. I think even root needs to clear the append-only attribute before being able to truncate the file.

Re:needs some VMS stuff (1)

Mr Pippin (659094) | more than 9 years ago | (#11204090)

Sounds like you want the "append" attribute, which already exists for some Linux filesystems.

Re:needs some VMS stuff (1)

hey (83763) | more than 9 years ago | (#11204102)

So what should write permission mean? That you can write the file down to 5 bytes in size, but no less?!

Re:needs some VMS stuff (1)

Tet (2721) | more than 9 years ago | (#11204109)

I think I'd add some VMS stuff. Like a Delete attribute. VMS you can set people to have read/write/execute and delete.

Why? If you have write access, then you can completely trash the file anyway. The ability to delete the reference to the file isn't really relevant if you can alter the contents. I never understood why VMS had a separate delete attribute. I just can't see what you gain by having it.
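The point is easy to demonstrate from the shell: write permission alone is enough to destroy a file's contents, with no unlink involved (hypothetical path):

```shell
echo "precious" > /tmp/demo.txt
: > /tmp/demo.txt        # truncate to zero bytes -- write access only, no delete needed
wc -c < /tmp/demo.txt    # prints: 0
```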

First Post! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11203949)

Whats wrong with the first post??

Re:First Post! (0)

Anonymous Coward | more than 9 years ago | (#11203967)

Nothing, you're just not it.

Obvious (0)

Anonymous Coward | more than 9 years ago | (#11203951)

"I would have spelled "creat" with an "e" on the end..."

It's current bone headed owner? (-1, Redundant)

Lead Butthead (321013) | more than 9 years ago | (#11203956)

One name: SCOG

Wrong (0)

Anonymous Coward | more than 9 years ago | (#11204113)

In what way does SCOG own Unix? They bought a license to sell a version of it and give 95% of the revenue to Novell. And Novell doesn't own the name.

Having said the above, however, you do point to a longstanding problem with Unix: there were various vendors and flavors of it. Unix will become open source and standardized. All Unix will become POSIX compliant and everything will be good. (A guy can dream, can't he?)

Program Installation Locations (4, Insightful)

ShortSpecialBus (236232) | more than 9 years ago | (#11203957)

The first thing to change should be how programs get installed.

EVERYTHING right now goes in /usr, without a directory, because everybody is too lazy to have /usr/foo/bin and /usr/foo/lib in their respective environment variables, because it's too much of a "pain" to put them in there on software installation, and it makes library linking more difficult.

Right now, if I want to uninstall a program, I have to remove it from about 10 different places, many of which aren't obvious (/etc, /usr/lib, /usr/bin, /usr/share, et al.) and there's no good way to do it.

Find a way (maybe symlinks /usr/lib/foo.so -> /usr/local/foo/lib/foo.so, maybe something else, I don't care) to make it so program installation/uninstallation makes more sense.

Re:Program Installation Locations (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#11203989)

That's why you use a package management tool, you moron.

Re:Program Installation Locations (1)

mqRakkis (521550) | more than 9 years ago | (#11203990)

I second that. Program installation locations should be fixed somehow. A program on UNIX is scattered all over the place. I've been using Linux for about 3 years and I still don't know where most programs are installed (granted, I never really looked into it that much). Look at Windows: you clearly specify the installation directory, and then *all* the files go there. Then there's an Uninstall that you can use to easily remove the program.

Re:Program Installation Locations (1)

mqRakkis (521550) | more than 9 years ago | (#11204036)

To add to my post; I realize there are package management tools like rpm and apt, but the problem of files-scatter-all-over-the-place still remains. And imho this shouldn't be fixed with some kind of external tool, it should be part of the core OS design itself.

Re:Program Installation Locations (1)

Phleg (523632) | more than 9 years ago | (#11204126)

Why? Why is this a problem? Tell me one thing that this complicates or makes you unable to do.

Re:Program Installation Locations (0)

Anonymous Coward | more than 9 years ago | (#11204139)

I have a p: drive where I install programs, separate from the normal c: drive where windows gets installed. I always install applications to the P: drive, but my observation is that some applications will still stick stuff on the c: drive.

So, most stuff gets installed into the installation directory (p:), some stuff gets put into "%windir%\system32" (c:), but sometimes some go into "%ProgramFiles%\common files"!

GNU Stow (2, Informative)

Anonymous Coward | more than 9 years ago | (#11204013)

That is why distributions have package management systems for GNU/Linux. A single command is sufficient to install/remove a package.

However, I understand your problem, when it comes to manual installation. There is a project GNU Stow [gnu.org] to handle what you are talking about.

Re:Program Installation Locations (3, Informative)

Anonymous Coward | more than 9 years ago | (#11204024)

Speak for yourself - for years, I've installed packages in "/usr/local/packages".
Package "foo", version "N" goes in "/usr/local/packages/foo-N".
The current version of "foo" has a symlink to it from "/usr/local/packages/foo".
"/usr/local/bin" contains symlinks to the appropriate files in "/usr/local/packages/*/bin"
Upgrades (and downgrades) are trivial.
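That layout can be sketched from the shell (sandboxed under /tmp, with a hypothetical package "foo"):

```shell
# Per-package directories plus two levels of symlinks, as described above.
base=/tmp/packages
mkdir -p "$base/foo-2.0/bin" /tmp/bin
printf '#!/bin/sh\necho foo 2.0\n' > "$base/foo-2.0/bin/foo"
chmod +x "$base/foo-2.0/bin/foo"
ln -sfn "$base/foo-2.0" "$base/foo"      # "current version" symlink
ln -sf  "$base/foo/bin/foo" /tmp/bin/foo # command symlink, as in /usr/local/bin
/tmp/bin/foo                             # prints: foo 2.0
```

Swapping versions then means repointing a single symlink, which is essentially what GNU Stow (mentioned above) automates.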

Re:Program Installation Locations (1)

Col. Bloodnok (825749) | more than 9 years ago | (#11204058)

That's platform dependent. Many (decent :) UNIX platforms put optional software in the /opt directory...

Re:Program Installation Locations (1)

MarkByers (770551) | more than 9 years ago | (#11204076)

I don't know what distribution you are using, but most distributions have a package manager which handles installing/uninstalling of packages automatically, so you don't have to worry about where the files are. A single command will do the job.

Re:Program Installation Locations (1)

acvh (120205) | more than 9 years ago | (#11204154)

"you don't have to worry about where the files are"

this sounds like a Microsoft answer: and it's not a very good one. why can't installing a program on Linux be like installing a program on OSX? Copy an application bundle to the application directory. Done.

Re:Program Installation Locations (0)

Anonymous Coward | more than 9 years ago | (#11204081)

Wait - you want a special line in the environment for every single program?

Re:Program Installation Locations (2, Interesting)

grumbel (592662) | more than 9 years ago | (#11204106)

I second that; the world could be so much easier already if we just had wildcard support in PATH and other environment variables. Something like PATH=/usr/*/bin, or something along those lines, could make life much easier.

The trouble is really that the current file hierarchy was designed to contain only a basic Unix system (ls, rm, libc, etc.), not the full-blown multimedia desktop that most Linux systems are today.
Stuff like the FHS doesn't even try to fix the mess; it just standardizes it. Most likely we will be stuck with the current mess of a file hierarchy for a long time to come, and hard-compiled paths in a bunch of binaries don't make it much easier to get rid of either.
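In the meantime, a login script can expand the glob once at shell startup, which approximates wildcard PATH support. A sketch with hypothetical directories under /tmp:

```shell
# Poor man's PATH=/tmp/opt/*/bin: expand the glob ourselves, once.
mkdir -p /tmp/opt/foo/bin /tmp/opt/bar/bin
printf '#!/bin/sh\necho hi from bar\n' > /tmp/opt/bar/bin/hi
chmod +x /tmp/opt/bar/bin/hi
for d in /tmp/opt/*/bin; do PATH="$PATH:$d"; done
hi   # prints: hi from bar
```

The obvious limitation is that packages installed after login aren't picked up until PATH is rebuilt, which is exactly why real wildcard support in the lookup itself would be nicer.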

Re:Program Installation Locations (1)

DarkHelmet (120004) | more than 9 years ago | (#11204115)

Maybe part of the problem is that environment path variables themselves can't be recursive.

Having something like PATH=/usr/* would be useful.

Re:Program Installation Locations (4, Interesting)

plover (150551) | more than 9 years ago | (#11204125)

Are you suggesting that an installation process more like Windows installers would leave easier-to-clean-up code? Because if so, I've got this real nice bridge to sell you.

The problem I have with an "installer" system is that immediately developers will extend it to do things it shouldn't be doing. "Hey, you know, when we install this program we should have it send gmail invites to six people, FTP a pretty picture of a llama while we construct suitable advertising panels, and create three new users with the authority to start, stop and pause the data subsystem."

Other than the llama thing, people have done all that crap and more with Windows installation tools. They blindly overwrite shared system files (leading to DLL hell), they muck up the registry, they install hundreds of class IDs for internal-use-only COM interfaces, plop in unrelated browser helper objects, add random directories to the front of the system path, launch odd services that do god-knows-what, wedge in a startup task or two, and then demand you reboot your system.

It's taken Microsoft many years to realize they couldn't control the installers, and so with XP they changed the OS to try to defend itself from renegade installations. It would be extremely sad to see a UNIX equivalent.

configuration (5, Interesting)

meshko (413657) | more than 9 years ago | (#11203968)

I think the biggest problem with Unix is the lack of a standardized way of doing certain things, in particular program configuration. Even simple programs that require very simple configuration store it in random places and formats, not to mention things that require some serious config files, like sendmail, apache, or X. Creating a cross-platform, powerful configuration language would help.

Re:configuration (1)

dotwaffle (610149) | more than 9 years ago | (#11204030)

Actually, not to go against you or anything, but I find Unix actually fairly compliant and to-standards, as opposed to Windows applications, where every application does things differently; or at least, apps from different companies do. The only exceptions in the Unix world I have found are EMACS (god forbid it) and lovely, lovely ViM.

In fact, the best way to improve Unix would be some improvements to access control (presumably ACLs help with this). What if I want to give two people access to a file but no one else? I have to create a _group_ for them. Why isn't there an easier way!!!

Re:configuration (1)

meshko (413657) | more than 9 years ago | (#11204073)

I don't know of a good solution to this problem or I would have mentioned it.
As for ACLs -- I'm not a big fan of them. I think they violate the KISS principle.

my answer (3, Funny)

ubiquitin (28396) | more than 9 years ago | (#11203974)

Q. What's wrong with Unix?
A. All those slashes and dots.

Q. How you would fix it?
A. um, slashdot

Of course!

What's wrong? (0)

Anonymous Coward | more than 9 years ago | (#11203978)

It's free.

How to answer the question. (1, Funny)

Tackhead (54550) | more than 9 years ago | (#11203983)

> What insightful answers did the rest of Slashdot give when they applied to work at Google? To repeat the actual question, 'What's broken with Unix? How would you fix it?'

"The fact that I have yet to receive significant monetary compensation for working with it. I would fix it by having someone at Google to hire me with a starting offer of $100,000 per year salary and a signing bonus of options for 50,000 shares of pre-IPO Google stock."

Why are you looking at me like that? I figured all the good jobs were gone, so I was trying for a marketing position.

The MathWorld Answers (2, Interesting)

Ed Pegg (613755) | more than 9 years ago | (#11203984)

Google called me a wimp for not answering the non-mathematical questions. At MathWorld News [wolfram.com], you can see how Eric and I answered all the other questions.

In a word... (3, Insightful)

rongage (237813) | more than 9 years ago | (#11203986)

Printing - more specifically, Postscript Printing.

This silliness of having to generate PostScript so Ghostscript can generate PCL so you can print is just wrong -- empty-brained, someone-forgot-to-wake-up wrong.

PCL is available on every major printer on the market today - it IS the standard. PostScript is a has-been. Dump it today.

That is what is wrong with *nix, and what I would do to fix it is require all software to support PCL printing directly.

Re:In a word... (2, Insightful)

Anonymous Coward | more than 9 years ago | (#11204116)

Every modern system has a native printing language, and each uses the native language to abstract multiple end languages.

Postscript is an intelligent way to abstract non-Postscript printing. Postscript is well documented, and is in itself a useful print language.

Otherwise we'd be back in MS-DOS. Do you remember when each application had its own audio, video, and print drivers?


Re:In a word... (1)

hey (83763) | more than 9 years ago | (#11204128)

I know this is off topic, but I was looking to buy a printer the other day and went to tons of websites to find one with PostScript, since I know that works best with Linux/Unix. It was hard to find a cheap (under $1000) printer with PostScript. Suggestions?

needs stdctl & no ioctl (0)

Anonymous Coward | more than 9 years ago | (#11203991)

Needs a standard control stream and no ioctl.

Does it reliably enable true modern computing? (4, Insightful)

Ars-Fartsica (166957) | more than 9 years ago | (#11203994)

Does unix enable people to build clusters, serve multimedia content, create sustainable high-throughput networks etc etc? Yes. Most implementations also provide for these true modern computing environments reliably and cheaply. What else do you want an OS to do? If an OS can reliably enable the modern application layer, to me it has satisfied the criteria of an OS.

While I agree that the core OS has not moved much in decades, I also see very little motivation for this as much of the required functionality has moved up the stack to the application layer.

Plan9 is what's right with UNIX (5, Informative)

andrewzx1 (832134) | more than 9 years ago | (#11203996)

If you read the motivations behind writing Plan9 (documented on slashdot previously), there are many descriptions of what the authors thought was wrong with UNIX. And the guys who wrote Plan9 are the same guys who wrote the better part of UNIX. And for you youngsters, UNIX is not LINUX. - AndrewZ

Re:Plan9 is what's right with UNIX (1)

nocomment (239368) | more than 9 years ago | (#11204103)

plan 9 is nice, and to give you an idea of the power and extensibility of it, go play with Inferno [vitanuova.com].

cynical view (5, Insightful)

Keitopsis (766128) | more than 9 years ago | (#11204003)

Unix is great!, unless:
- You just want a plug and pray answer
- You just want a word processor
- You just want ......

If someone is only looking for a single application, it is hard to shove such a versatile system down their throat.

Create a truly modular UNIX/OS that does not depend on any single environment (init/SysV). Make a pluggable API-level interface that you can plug anything into, from a single application to a complete system environment. Then get someone to develop EXACTLY what you want.

Idiotware without the bloat.

Laughing all the way,

-- Kei

Re:cynical view (2, Interesting)

pclminion (145572) | more than 9 years ago | (#11204091)

If someone is only looking for a single application, it is hard to shove such a versatile system down their throat.

And yet Linux is becoming an increasingly common choice for all sorts of embedded, special-purpose devices.

A lot of people don't really understand what UNIX is. At its heart, it is just a philosophy, not a system. A way of thinking about and solving problems which has remained relevant and useful for decades. All real-world UNIX systems have lots of crap bolted on, out of necessity, but the inherent "UNIX-ness" of the system emanates from the design philosophy, not the implementation or application.

UNIX is a How, not a What.

Has to be said (3, Insightful)

aendeuryu (844048) | more than 9 years ago | (#11204004)

One big thing that's wrong with Unix is SCO.

Re:Has to be said (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11204069)

Oh cool! This guy said what he's supposed to say: I think I should immediately assume that he's really cool. Mod him up everyone!


MOD SELF UP!!

Anonymous Coward | more than 9 years ago | (#11204007)

C with it's standard null-terminated strings. Bad news all around.

Re:MOD SELF UP!! (0)

Anonymous Coward | more than 9 years ago | (#11204063)

Shit! I used it's when I should have used its. Don't say anything, I have already flayed my self within an inch of my life for this mistake.

What's wrong? (1)

Libor Vanek (248963) | more than 9 years ago | (#11204009)

I really like the idea from the article: what's wrong with UNIX is that it's a great stand-alone system, but once you connect a lot of UNIX machines, it's hard to get an "all-over-network" filesystem/accounts/desktop environment/SW distribution & update/etc. I know that there are solutions for every single one of these things, BUT they are always "non-standard" (compared to a "stand-alone" UNIX machine) and not-well-cooperating solutions...

Re:What's wrong? (1)

pclminion (145572) | more than 9 years ago | (#11204032)

I really like the idea from the article: what's wrong with UNIX is that it's a great stand-alone system, but once you connect a lot of UNIX machines, it's hard to get an "all-over-network" filesystem/accounts/desktop environment/SW distribution & update/etc.

Isn't that a good thing? Super-homogeneous systems with lots of communication channels are easy for worms and other malicious programs to spread through. The heterogeneity of UNIX is what has kept it relatively free of such exploits for so many years.

Easy! (5, Insightful)

Telastyn (206146) | more than 9 years ago | (#11204023)

Lack of coherent newbie documentation.

Sure, man pages exist, but even once you learn that man does what help really should, the man pages are generally written by programmers, for programmers.

Newbie guides generally don't get any further than a small command summary, which doesn't really show any strengths of Unix over using a GUI [or Windows!].

The best thing, I think, would be to provide more "whole system" examples/help rather than help for each individual command. Take some nice simple topics [how to add many users, how to determine network utilization programmatically, how to determine open ports and what process is using them...] which are painful to do on Windows, and use a variety of Unix tools to solve them.
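One such "whole system" example, sketched with ordinary tools: answering "which login shells are in use on this box, and how many accounts use each?" by composing small utilities rather than using any single command:

```shell
# Field 7 of /etc/passwd is the login shell; extract, group, and count.
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
```

The output varies by machine, but the recipe illustrates the teaching style being asked for: each stage (cut, sort, uniq, sort) is trivial on its own, and the strength is in the combination.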

Help (0)

Anonymous Coward | more than 9 years ago | (#11204127)

The help system is one thing Windows XP does better than Linux. Now if I could just get my wife to click on Start/Help instead of calling me up at work and asking me how to use the computer, we'd be all set...

Unix is too powerful (3, Informative)

Anonymous Coward | more than 9 years ago | (#11204029)

I know it sounds silly, but it's like asking a consumer to operate a Bradley armoured fighting vehicle: it wasn't built for consumer use, it's got hundreds of knobs and options and configurations, and if you don't get it set up right the first time it is a tremendous headache to fix. Consumers want a gas pedal and a brake; windshield wipers are fine, but when you put on a .50cal machine gun mount, even if it's "turned off", it scares people away.

It's a canonical example of something that tries to be everything to everybody, but ends up being too hard for anyone to use.

The I/O Model (1)

BKCat (844270) | more than 9 years ago | (#11204039)

Jeez, where do I start?

The basic Unix I/O model is that of byte streams flowing around the system, and in and out of devices. Unfortunately, this model really only applies to tape drives; most devices have to be addressed first (such as a disk block number), and then read or written in discrete chunks (blocks, packets).
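That mismatch is visible even from the shell: dd exposes the address-then-read-a-block model directly, which is quite unlike an undifferentiated byte stream (a sketch using an ordinary file as a stand-in for a block device):

```shell
# Create four 512-byte "sectors", then address sector 2 and read exactly one block.
dd if=/dev/zero of=/tmp/disk.img bs=512 count=4 2>/dev/null
dd if=/tmp/disk.img bs=512 skip=2 count=1 2>/dev/null | wc -c   # prints: 512
```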

Pseudo-kernel Applications (1)

logicnazi (169418) | more than 9 years ago | (#11204042)

In the core system, what I find most lacking is better support for providing pseudo-kernel-type services. For instance, windowing systems/3D accelerators end up being interfaced with a bunch of hacks and make use of little to no unified kernel support for this sort of thing.

I've also never been impressed by the IPC facilities in Unix; it seems there are a bunch of 'okay' solutions cluttering up the kernel but no really good/modern IPC solution. It would be nice if we could get one protocol with a bunch of options, or a few good protocols aimed at certain situations (shared memory models, message passing, etc.).

I'm sure there are a few more things that aren't the best deep down in the kernel architecture. For instance, many not-super-critical sections of code in Linux use bad/stall-prone algorithms. However, this isn't a problem in all unices. But other than the above issues, the only real disadvantage to users is lack of software, and that isn't really a Unix problem.

uniform filesystem perhaps? (1)

thanasakis (225405) | more than 9 years ago | (#11204045)

I was taking a look at Plan 9 [bell-labs.com] and I was impressed by the feature that allows every single process to have a unique perspective of the file system. For example, if a process wants to draw in its window, there is a special file in /dev (I think) that maps onto its own window, and so on. Features like this are implemented via special handling in normal Unix and are probably rare.

These ideas could perhaps extend the philosophy of "everything is a file" and at the same time improve security.

Too late! (0)

Anonymous Coward | more than 9 years ago | (#11204052)

I've been using Unix so long, it seems "normal" to me. I only recognise some of the more bizarre Unixisms, when I answer Unix newbie questions.

Hmm... (pondering a bit)... Oh, yeah. Get rid of Unix signals. They are an archaic throwback to when Unix had no concept of threading and needed some sort of asynchronous mechanism. But they don't work very well and are lossy. POSIX condition variables fixed that. If you get rid of Unix signals you will increase the reliability of Unix applications by at least an order of magnitude, based on my experience. That's one thing Windows got right: it doesn't have asynchronous signals.
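The asynchronous delivery being complained about is visible even at the shell level, where trap is the signal-handling mechanism (a tiny sketch):

```shell
# The handler can run between any two commands -- the code being interrupted
# has no say in when.
count=0
trap 'count=$((count+1))' USR1   # handler for SIGUSR1
kill -USR1 $$                    # signal ourselves
echo "handled $count"            # prints: handled 1
```

Real programs face the same issue in harsher form: interrupted system calls and restrictions on what a handler may safely do, which is the kind of thing condition variables sidestep by making the wait explicit.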

Nothing (1)

bigberk (547360) | more than 9 years ago | (#11204057)

Seriously, it ain't broke. Been working great for a long, long time. What will make UNIX, *BSD, Linux collectively strong is sticking to the same UNIX plan (as has been the goal with Linux from the start).

simplicity (0)

Anonymous Coward | more than 9 years ago | (#11204059)

It is not easy to use and has no standards. Whenever I sit down at Unix/Linux/BSD it has a different shell, editor, install place, package system, and so on. Decide on a standard and YES PLEASE let there be a million options, but let's use one way in all places.

Hmmm... how could Unix be improved? (0)

Anonymous Coward | more than 9 years ago | (#11204086)

make it challenging, soft and cuddly, with lots of fire power!!

The C language (5, Insightful)

lazy_arabica (750133) | more than 9 years ago | (#11204088)

Yeah, I know that most *nix lovers simply love it. But let's face it: this language, which is still the most important one in a Unix environment, is really aging. It is possible to develop big software in pure C, but it takes much, much more time, and the risk of introducing bugs and security flaws is huge. Only the minimal low-level core of the system should be based on C; the rest should be developed in a modern, high-level language.

What's broken with Unix? (2, Informative)

mgv (198488) | more than 9 years ago | (#11204095)

It's hard to pinpoint anything specific that is broken with Unix as a whole.

But there are lots of subsystems that aren't exactly perfect.

Examples that come to mind:
*File permissions only cover user/group/others rather than individual users; poor record locking on network shares; lack of automounting as an intrinsic feature of the operating system.

*Windowing subsystems that network, but can't handle 3D networked graphics effectively, or support the more advanced hardware features of graphics chips particularly well locally.

*Software packaging systems that develop conflicts. (Probably more of a linux problem, actually)

- I am aware that all of these have workarounds or are being worked on -

The kernels of most Unixes (and, for that matter, Linux) are fairly well tuned for a variety of things, although they are subject to a number of internal revisions to try to do better at multitasking and multiple-processor scaling, for example.

Where these systems will probably fail the most is when the underlying hardware changes a lot - for example, handling larger memory spaces and file systems, or perhaps even moving to whole new processors (e.g., code-morphing CPUs such as Transmeta's, or asynchronous CPUs). These designs are quite radically different, and we have developed so far down a specific CPU/memory/hard-drive model that it's quite difficult to look at major changes, as they aren't easily supported by the operating systems.

Just my 2c, from a fairly casual observer - it would be interesting to hear what the main developers think about all of this.


mmap (2, Interesting)

blaster (24183) | more than 9 years ago | (#11204098)

I know that a lot of people think it is a great thing, but it is really problematic. It makes great sense on systems with fixed disks, but once you have transient filesystems (network filesystems or removable drives) it becomes a real problem. If a filesystem is removed, programs may crash since the mapping disappears. All other entry points for this sort of thing fail at a system call (read or write), which allows for graceful recovery. Conceivably the OS could inform the user or insert a zero-filled backing, but that could lead to data corruption.

This is a particularly bad problem for desktop systems, where the users are not experts. For server or cluster systems it is not an issue.

User Friendly (2, Insightful)

DaFallus (805248) | more than 9 years ago | (#11204099)

That is part of the problem right there. All of you are talking about a lot of complex issues that the common user knows absolutely nothing about, and no one has mentioned this: how about making it intuitive and simple enough that my grandmother could use it? Maybe then you'd see more people using it than Windows...

Simple... (3, Funny)

andreMA (643885) | more than 9 years ago | (#11204114)

SOLUTION: 2 MT airburst over Lindon, UT

Oh, with UNIX, not for UNIX. Never mind.

As you were.

Can Never Surf the Pervasive Wave (1)

LordMyren (15499) | more than 9 years ago | (#11204148)

There's no groundwork to allow Unix to surf the pervasive wave. Instead there are only huge ancient walls which people have been waving their hands at for decades; the wall hasn't noticed. Unix is a perfect multi-user base, except for one flaw.

X needs a way to allow applications (WMs) to discern the source of input. Built on this core, window managers or X itself could hack up solutions to craft "multiple cursors". (Ultimately the window manager is going to have to get deeply involved with this policy; I think the best solution is to simply expose the source of input and have the window manager hold state for each input source and manipulate the single core pointer, "faking" multiple pointers. This keeps X from having to deal with drawing and policy, which are the WM's domain.) Currently the best interaction you can get is to mux together your inputs into one keyboard/mouse pair.

The multiple pointers [freedesktop.org] problem in X has been around for a while, but no one's been able to crack it. X was simply never meant to have more than one cursor, never meant to assume anything past a single keyboard and mouse.

The one downside of this solution is that every X input event would have to pass through the window manager, which I imagine is expensive.

Synergy [slashdot.org] allows you to share input devices between computers (even cross-platform), but the next logical step is missing.

All we need is some way to see the physical source of each input event.

There's no bigger Achilles heel.

1. See unix hater's handbook (0)

Anonymous Coward | more than 9 years ago | (#11204149)

Note that it contrasts Unix with the high-end systems of its day (i.e. Lisp Machines). Windows is even worse than Unix, hence it did even better (by some metrics).

Top of my list: the hierarchical filesystem. The Relational Revolution happened; DEAL WITH IT. Hierarchical databases are appalling for data organisation. They should be available as a speed optimisation, but the core filesystem of a next-gen system should be post-hierarchical. My Lords of Acid mpgs should be in "hot" and "dance"... AND I should be able to "ls -x" and AT LEAST SEE that a particular mpg is in BOTH "hot" and "dance".

Most of the problems with application packaging come from the fact that we have an asinine hierarchical system on Unix - any reasonable system would have application binaries in both /bin and /appdir, with a categoric, relational, or at least freaking set-theoretic file system.

Break FREE of hierarchical thinking.
