
A comparison of MS-Windows® and Linux® security

Table of Contents

 
  1. A comparison of MS-Windows® and Linux® security
  2. Table of Contents
  3. Introduction
    1. Technical comparison of security features
      1. Definitions
    2. Cultural comparison of security features
  4. Amplifications of the discussions
    1. Configuration and State
      1. The registry
      2. By way of contrast...
    2. Should Software be Written For Profit?
    3. Tight integration of kernel and application
      1. Security and reliability
      2. Portability
      3. Performance
      4. Ease-of-use
      5. Ease of coding
    4. Distributed systems:  RPC, NFS, NIS, Kerberos, AFS, and NetBIOS
      1. Identification, Authentication, Authorization, and service in the real world and the UNIX world
      2. NetBIOS
        1. The Microsoft UNIX services for MS-Windows
    5. Money and software quality
    6. Malware and popular culture
    7. DLL Hell
  5. Conclusion

Introduction

For a long time, I have been carrying on a dialogue with my fellow system administrators about the relative security of UNIX (which includes Linux) versus MS-Windows.  There can be no argument that the impact to society from insecure software is vastly greater in the MS-Windows world than in the UNIX world.  But is this a consequence of the fact that MS-Windows runs on about 80% of the world's desktop computers (as Microsoft claims), or is it a consequence of intrinsic design features of UNIX and MS-Windows (as the Linuxians claim)?  The MS-Windows sysadmins claim that Windows is just as secure as UNIX, and that the only source of concern is that MS-Windows has such an overwhelming majority of the market share.  UNIX sysadmins claim that UNIX is intrinsically better designed than MS-Windows.  In this essay, I am going to explore both the technical details of the operating systems and the surrounding culture of the operating systems.

The first assertion, that MS-Windows has the majority market share, is actually not true in many markets.  I will refer the interested reader to the Netcraft Web Server Survey for more details on that.  However, the following diagram and commentary are telling:

[Figure: UNIX vs. Windows security, from http://www.osdata.com]

Microsoft had 26% of the Web server software market share (as of January 2002), yet 60% of defaced Web sites ran Microsoft Web server software (about 30,000 defacements between April 2000 and February 2002).

"Microsoft software runs about a quarter of Web servers, but is the target of the majority of successful Web defacement attacks." —Los Angeles Times, February 13, 2002

This essay is going to explore the latter assertion, that UNIX is intrinsically more secure, in detail.  Table 1 has a summary of the technical security features and Table 2 has a summary of the cultural factors relating to security in the two operating systems.  The entries in the tables are keyed to sections in this document which explain and expand on them.

Ultimately, this essay is an exercise in futility.  People buy MS-Windows because it runs the software they want to use.  The software they want to use runs under MS-Windows because it is the most popular operating system.  These facts mean that Microsoft has a de facto self-perpetuating monopoly.  As the man said, "Nothing succeeds like success".  Furthermore, Microsoft has about $50 billion in cash and equities under its control, so it can buy (and has bought) competitors and litigants.


Malware and popular culture

In the movie "Terminator 3: Rise of the Machines", the pumps at a gas station become inoperative because of a computer virus.  Gas pumps cannot catch a virus, because they have no network connection through which new software could be downloaded.  And yet the audience completely accepts this story.  I find it very disturbing that people in this country have become accustomed to the idea that computerized systems are so subject to malware.  I also find it disturbing that they are willing to trust computerized systems with their lives at the same time that they have become so accustomed to malware.  Clearly, most people do not think about computers the way I do.

Perhaps this is a good thing: after all, I really don't want to think about my toaster, I just want my toaster to make toast.  Similarly, the goal of computers is not that people should think about computers but rather that people should use computers to solve real world problems.  The Microsoft solution, with integrated applications and operating system, solves that real-world problem.  The problem is that the solution is insecure.  Microsoft "grew up" with PCs, and a PC is a personal computer.  You shouldn't have to worry about the security of a PC, just as you don't have to worry about the security of the papers on top of your desk.

But I would argue that this disconnect in people's thinking is a bad thing. In George Orwell's book 1984, there is a discussion about going to war. One of the arguments was "We are such a small nation that surely we pose no threat to them", and yet the next argument was "We are such a powerful nation that we can defeat them easily". These two arguments are disconnected. So, in the real world, people assume that any computer system can get a virus, which is false; and yet they are willing to use untrustworthy operating systems to conduct business, which is foolish. I believe that this disconnect comes about because nobody talks about security in real terms. The Bush administration talks about the need to enhance our nation's security because of the 9/11 disaster, and yet the number of Americans who die each year from gunshot wounds is an order of magnitude larger than the number of people who died on 9/11. Similarly, the number of people who die each year in automobile accidents swamps the 9/11 toll by an order of magnitude, and half of those fatalities are alcohol related. These facts are generally known, and yet rarely spoken about, because discussions about security have been co-opted for economic and political gain.

Your computer is your computer, and you ought to be the one who controls what goes on in that machine.  Anybody who tries to control what you do with your computer is not acting in your best interest; or at least, the burden is on them to show why you should let them.  This is true of viruses and worms, but it is also true of pop-up windows, spam, and Digital Rights Management.

Technical comparison of security features

The "advantage" column refers to which design is more secure.  It is not necessarily easier to set up initialy.

Feature: Kernel design
MS practice: Windows/9x is a 16-bit monolithic kernel with multitasking laid on top; Windows/NT is a 32-bit microkernel.
UNIX practice: There are monolithic kernels (e.g. Linux, MS-DOS) and microkernels (e.g. Mach).
Advantage: Toss-up
Discussion: Microkernels have a certain appeal to computer science types because they neatly partition the tasks of a kernel.  The problem is that they tend to have poor performance because of the overhead of passing messages from component to component.  It is not clear whether microkernels are more or less secure than monolithic kernels.  MS-DOS does not have virtual memory; Windows/9x does.  This is a bizarre design, in which an application adds functionality to the operating system, but it works tolerably well.

Feature: Kernel configuration
MS practice: Stored in a couple of files (the registry), which are flat databases.
UNIX practice: Stored in many text files.
Advantage: UNIX
Discussion: Having all of the configuration information in one file makes it easier for programs to manipulate the configuration, but viruses and malware can manipulate it just as easily.

Feature: Starting server processes
MS practice: Table driven.
UNIX practice: Scripts and symlinks.
Advantage: UNIX
Discussion: The vendor-supplied scripts rarely need modification, but if extra functionality is desired, it is easy to add it to a script.

Feature: Kernel in shared libraries
MS practice: The kernel makes extensive use of shared libraries.  Please refer to DLL Hell.
UNIX practice: The kernel does not use shared libraries (Linux modules are not shared libraries).
Advantage: UNIX
Discussion: Application installation can break the operating system.  There are anecdotes that installing an application on a server will sometimes supply a .DLL that fixes a problem with the operating system.  The separation between OS and applications is poor, which is why you have to reboot after installing an application under MS-Windows.

Feature: Web server privilege
MS practice: IIS runs in the context of the administrator account.
UNIX practice: httpd runs in the context of a normal user account, though it needs privileges just long enough to grab port 80.
Advantage: UNIX
Discussion: If a bad guy does manage to break the web server's security, the Windows implementation gives them access to the entire machine.  If a bad guy breaks httpd, all they have is one unprivileged account.  (A sketch of the privilege-dropping pattern appears just after this table.)

Feature: Distributed network file system
MS practice: NetBIOS (which was actually developed by IBM), which includes name service, authentication, authorization, and file service.
UNIX practice: NFS, AFS, rsync, Kerberos.
Advantage: UNIX
Discussion: NFS has essentially no security.  However, that is widely known and discussed, and everybody knows that you should not use NFS for security-sensitive applications.  Kerberos isn't really a distributed file system; rather, it is a distributed authentication system (see below for more details).  I don't know enough about AFS to rate its security, so I am inviting user comments.  Rsync isn't really a distributed file system either; rather, it is a tool for keeping two file systems in sync.  Since it can use ssh, it can be arbitrarily secure, and because rsync can be run in batch mode periodically, it is not so demanding in terms of reliability.  Again, the system designer has to decide whether the possibility of accessing stale data is acceptable.  Microsoft lumped authentication, authorization, and data transfer into a single protocol, and they have had no end of problems as a result.  NetBIOS does not encrypt packets by default, so it is just as secure or insecure as NFS.
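
To make the Web server privilege row concrete, here is a minimal sketch in C of the pattern httpd follows: bind the privileged port while still root, then drop to an unprivileged account before handling any client data.  The account name "www" and the bare-bones error handling are my assumptions for illustration; a real server makes both configurable.

    /*
     * Minimal sketch of the httpd privilege-dropping pattern described above:
     * bind the privileged port as root, then give up root before touching any
     * client data.  The user name "www" is an assumption.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <pwd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);          /* ports below 1024 require root */

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");                 /* fails unless we started as root */
            return 1;
        }
        if (listen(s, 16) < 0) { perror("listen"); return 1; }

        /* Now drop privileges: look up an unprivileged account and become it. */
        struct passwd *pw = getpwnam("www");        /* hypothetical account name */
        if (pw == NULL) { fprintf(stderr, "no such user\n"); return 1; }
        if (setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
            perror("setuid/setgid");
            return 1;
        }

        /* From here on, a compromise of the request-handling code yields only
         * the "www" account, not the whole machine. */
        printf("listening on port 80 as uid %d\n", (int)getuid());
        /* accept() loop would go here */
        close(s);
        return 0;
    }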

Definitions

Kernel
The part of the operating system that is responsible for controlling access to hardware (including memory management), for protecting processes from one another, and for security functions.

Malware
Viruses, worms, trojan horses, and other software which has evil intent.  The differences between viruses, worms, etc. reflect how the malware gets into your computer; the common theme is that this software does what somebody else wants, to your detriment.  Under this definition, software distributed by the vendor that takes away functionality (e.g. Digital Rights Management) could be construed as malware.  So be it.
MS-Windows
For purposes of this discussion, "Windows/9x" refers to Windows/95, Windows/98, Windows/98 SE, and Windows/ME.  "Windows/NT" refers to Windows/NT 3.51, Windows/NT 4.0, Windows/2000, and Windows/XP.  Windows/2003 probably belongs in the Windows/NT set, but I don't have any experience with it.
Open Source
Open source software is software for which the source code is generally available.  Note that "open source" software need not be free: open source software could be proprietary and released only under a non-disclosure agreement, or available at extra cost.  Note also that while a lot of UNIX software is open source, not all of it is.
scripts
Small programs, written in an interpreted language, that invoke other programs.  Examples of scripting languages include bash, the Bourne shell, Windows Scripting Host, and Perl.




Cultural comparison of security features

Computing security is more than just technology; it is also attitude.  It begins with the acknowledgement that security is an important issue.  Security also has to permeate the business processes used to create and use software.

Concept: Software for profit
Microsoft: Profit is good.
Open Source: Motivated by other considerations: pride, curiosity, solving a problem.
Advantage: Toss-up
Discussion: The profit motive tends to get in the way of acknowledging problems and fixing them quickly; the perception is that buggy software damages the vendor's reputation, and certainly time spent fixing problems could be spent creating new features that sell more copies.  However, profit also motivates people to fix problems whether they like it or not, because that is what they are paid to do.

Concept: Source code is available
Microsoft: No way (unless you are a powerful government).
Open Source: Of course!
Advantage: Open source
Discussion: There are about 14,000 people who work for Microsoft; however, most of them do not look at the software.  By way of contrast, anybody who wants to can look at open source code and apply whatever tests they wish.  If there is a problem and they are sufficiently motivated, they can fix it themselves.  See the discussion on open source.

Concept: The same vendor writes the OS and the applications
Microsoft: Most of the profit is in the applications; we only do the kernel to keep control over the system.
Open Source: Who has time to work on both kernel and applications and still hold down a day job?
Advantage: Open source
Discussion: The kernel writers can focus on writing a really solid kernel, while the application writers can virtualize the kernel interface and write software that will run on any kernel with only a recompile and a relink.  One consequence of this failure to separate is that when installing MS applications, one frequently has to reboot; when one installs non-MS applications, one seldom must reboot.  MS argues that tight integration creates higher-performance applications; the open source community believes that clean, simple designs lead to higher performance.  See the discussion on tight integration.

Concept: Updates (both hardware and software)
Microsoft: You will update when (and because) we say so.
Open Source: Update at your discretion.  If you want to run this high-powered application on a '486, be our guest.
Advantage: Open source
Discussion: Microsoft derives revenue from upgrading; in fact, they probably derive more profit from upgrades than they do from selling software on new machines.  Microsoft could, if they chose to, implement file formats in such a way that they were upwards compatible.  By way of contrast, HTML is upwards compatible: if a browser doesn't implement a tag, it ignores it.

Concept: Marketing
Microsoft: Let's spend lots of money on advertising.
Open Source: What money?
Advantage: Open source
Discussion: The media outlets that accept Microsoft advertising money are loath to go into a detailed investigation of the company or its products.  Every journalist studies this issue in journalism school, every journalist knows about this issue, and many journalists acknowledge that it is a serious issue.  On the other hand, this argument really is an ad hominem argument: it assumes that the press disagrees with me because the advertising sales people talk to the editorial staff.  I can't prove it.

Concept: Features
Microsoft: Features are good.  Lots of features!  Put 'em in the kernel, whether that makes sense or not!
Open Source: A clean, simple system which is the basis for whatever the user wants to build.
Advantage: Open Source
Discussion: Remember that this paper is about computing security.  Small, simple systems are easier to understand, and therefore easier to secure.

Concept: History
Microsoft: Windows developed from a single-user computer system; system security was physical security.
Open Source: UNIX was always a multiuser, multitasking operating system; system security was part of the original requirements of the design, not a later add-on.
Advantage: Open source
Discussion: It is axiomatic that security has to be designed in and cannot be added on later.  In general, adding security to existing systems winds up having a "kludgy" feeling.  What typically happens is that security features are added later on, and the maintenance programmers give it their best shot with limited time and limited understanding (program maintenance is always risky).  As more and more security problems are uncovered, kludges are added to kludges.  Eventually, the system becomes unstable and must be scrapped.  Our experience over centuries of dealing with security issues is that the key to effective security is simplicity.  Make security complicated, and people will tend to bypass it.




Amplifications of the discussions

Configuration and State

One of the remarkable aspects of computer programming is the tradeoff between procedural code and configuration, or state, information.  I was once given the task of implementing a very complicated specification.  The programmer who had worked on it before me had created an incredibly convoluted chain of if...then...elses, and it was a bloody mess.  We completely revised the program into a rather large state table and a set of inputs: at each iteration, given a current state and an input, go to the next state and generate an output.  We also wrote a tiny table interpreter.  When it turned out (surprise!) that the spec was wrong, it was easy to fix because we simply changed the table.  This incident gave me a profound respect for the art and science of storing state information, and the need to do it well.
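
To make the idea concrete, here is a minimal sketch in C of that kind of table interpreter.  The particular states, inputs, and outputs are invented for illustration; the point is that the behavior lives in a data table that is easy to change, while the interpreter itself stays tiny.

    /*
     * A tiny table-driven state machine, as a sketch of the approach described
     * above.  The states and inputs are made up for illustration.
     */
    #include <stdio.h>

    enum state { IDLE, RUNNING, DONE, NSTATES };
    enum input { START, TICK, STOP, NINPUTS };

    struct transition {
        enum state next;        /* state to move to */
        const char *output;     /* what to emit on this transition */
    };

    /* All of the behavior lives in this table; fixing a bad spec means
     * editing an entry here, not rewriting a chain of if/then/else. */
    static const struct transition table[NSTATES][NINPUTS] = {
        /*            START                TICK                 STOP            */
        /* IDLE    */ {{RUNNING, "start"},  {IDLE,    "ignore"}, {IDLE, "ignore"}},
        /* RUNNING */ {{RUNNING, "ignore"}, {RUNNING, "work"},   {DONE, "stop"}},
        /* DONE    */ {{DONE,    "ignore"}, {DONE,    "ignore"}, {DONE, "ignore"}},
    };

    int main(void)
    {
        enum input script[] = { START, TICK, TICK, STOP };
        enum state s = IDLE;

        for (size_t i = 0; i < sizeof script / sizeof script[0]; i++) {
            const struct transition *t = &table[s][script[i]];
            printf("state %d, input %d -> %s\n", (int)s, (int)script[i], t->output);
            s = t->next;
        }
        return 0;
    }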

The registry

The registry is a storage place for the configuration of an MS-Windows machine.  It is implemented as a flat file, although the user interface would suggest a tree orientation.  The registry not only contains information on the applications, but it also has state information for the operating system, including the kernel.

The problem with the registry is that ordinary users and ordinary user software have to access it, so either there must be special access controls on individual keys, or else there is a risk that an evil or erroneous application can modify a system-critical key.  Every time you come across a procedure in the literature for accomplishing something by modifying the registry, there is a warning to be careful lest the computer be left in an unbootable state.

In Windows/XP, Microsoft went to a great deal of trouble to alleviate this problem.  However, they did not solve it there: Windows Server 2003 claims to have solved it, which implies that Windows/XP did not.  But there is still a registry in Windows Server 2003, so the problem remains.


By way of contrast...

There is no data structure in a UNIX system which, if you screw it up, will leave the system unbootable, with one exception: the partition table.  However, the partition table is read by the ROM BIOS, not by the OS, so screwing up the partition table will screw up every operating system on the machine; in general, nobody messes with the partition table.  UNIX typically stores its state and configuration information in two places: a directory called /etc, and hidden files in a user's home directory.  These "hidden" files are not really hidden; they are merely hard to find, simply because most of the time you don't need to mess with them.  But if something unanticipated happens (what that would be, I cannot anticipate), you can go into those files and fix the problem.  UNIX software reads the system-wide configuration information and then reads the per-user configuration, so that the user has final authority over how the software behaves.
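
As a sketch of that system-wide-then-per-user convention, the fragment below (in C, under assumed names) reads a hypothetical /etc/example.conf and then the user's ~/.examplerc, letting the later file override the earlier one.  Real programs differ in file names and syntax, but the order is the same.

    /*
     * Sketch of the UNIX configuration convention described above: read the
     * system-wide file first, then the per-user file, so the user wins.
     * The file names and the "greeting=" key are hypothetical.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char greeting[128] = "hello";    /* built-in default */

    static void read_config(const char *path)
    {
        FILE *fp = fopen(path, "r");
        if (fp == NULL)
            return;                         /* a missing file is not an error */

        char line[256];
        while (fgets(line, sizeof line, fp) != NULL) {
            if (strncmp(line, "greeting=", 9) == 0) {
                strncpy(greeting, line + 9, sizeof greeting - 1);
                greeting[strcspn(greeting, "\n")] = '\0';
            }
        }
        fclose(fp);
    }

    int main(void)
    {
        char userrc[512];
        const char *home = getenv("HOME");

        read_config("/etc/example.conf");           /* system-wide defaults */
        if (home != NULL) {
            snprintf(userrc, sizeof userrc, "%s/.examplerc", home);
            read_config(userrc);                    /* per-user file overrides */
        }
        printf("greeting is: %s\n", greeting);
        return 0;
    }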



Should Software be Written For Profit?

Originally (circa 1958-1975), a lot of software was written to sell hardware. The vendors knew that nobody would buy their hardware unless it had some software to do something, so every vendor created its own operating system and included some compilers.

However, there was a nascent open source community. I remember, in 1978, getting a tape with the source code for the DECsystem-10 Pascal compiler, for free, from the University of Texas. I had to find another machine that had a Pascal compiler to compile the Pascal compiler, so I found a DECsystem-20 that had the compiler but no source code: I had the source code but no compiler, and that was good enough. I compiled the compiler, brought it back to my DEC-10, installed it, then recompiled it by way of testing it, and it all worked. Although I had not heard of it at the time, UNIX was making inroads into the vendor-specific world, using the same sort of back-door cooperation between the geeks in the trenches.

Today, of course, everybody has heard of "open source" software. Hundreds of thousands of geeks all over the world are creating systems that are more reliable and less expensive than their commercial counterparts. What drives these people to do this creative work, and is it a viable business strategy for the long term?

Any psychologist (and if any psychologist reads these words, I'll be amazed) will tell you that humans are motivated by many things, including breathing, drinking, eating, shelter, and emotional support. Once humans have the basic needs of food, drink, and shelter, then they have time to seek out and acquire emotional needs. Writing software fulfills an emotional need, either to slake an anger that something doesn't work, or for pride, or sometimes as a learning experience. So long as computer people are reasonably well paid, the open source model can tap into those emotional needs to get the human capital it needs.

However, the question of long-term viability is a tough one to answer. Microsoft is viable virtually forever because it has billions of dollars in the bank. Insofar as I know, no other high-technology company can make that kind of claim, which suggests that no other high-technology company is viable virtually forever. Amazon.com, for example, is pro forma profitable, but that only means that they pick and choose what they wish to include on their profit and loss statement. Is Amazon.com viable in the long term? Nobody really knows. So is the open source model viable? Nobody really knows that either.

Tight integration of kernel and application

Is tightly integrating the kernel and the applications a good thing or a bad thing? In the MS-Windows world, there are three ways that the kernel and applications can interact: through system service calls, through the registry and through shared libraries (.DLLs). In the UNIX world, the kernel and applications can interact only through system services. In theory, an application could be written that modifies the kernel configuration, but in practice, the kernel configuration is modified through simple text editors such as vi.

The Linux kernel does not use sharable libraries, because the kernel developers (wisely) believed that sharable libraries would compromise the reliability of the kernel. There are subroutines in the kernel, obviously, but they are not sharable outside of the kernel. So the only way that an application can interact with the kernel is through system service calls. These system service calls are tightly controlled, so that the worst that a process can do is destroy itself, but processes cannot harm other processes. The operating system is scrupulously careful that the only way processes may interact with one another is via system service calls, and the only way processes may interact with the hardware is via system service calls.
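
A tiny program illustrates the point: the kernel validates every argument that crosses the system-call boundary, so a buggy or malicious process gets an error back rather than an opportunity to damage the kernel or its neighbors.  This is a sketch of Linux behavior; the deliberately bogus pointer is only there to provoke the error.

    /*
     * Sketch of the point above: an ordinary process reaches the kernel only
     * through system calls, and the kernel checks the arguments.  Passing a
     * bogus pointer gets the process an error (EFAULT), not a damaged kernel.
     */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "a well-formed request\n";
        char *bogus = (char *)1;            /* deliberately invalid address */

        /* A normal system call: the kernel copies the buffer and writes it. */
        if (write(STDOUT_FILENO, msg, sizeof msg - 1) < 0)
            perror("write");

        /* A malformed system call: the kernel rejects it, and the process,
         * not the kernel, bears the consequence. */
        if (write(STDOUT_FILENO, bogus, 10) < 0)
            fprintf(stderr, "bad write rejected: %s\n", strerror(errno));

        return 0;
    }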

Security and reliability

Clearly, making the registry modifiable by a computer program means that the software modifying the configuration can check that the configuration is valid. One of the problems in the UNIX world is that you frequently try something, and if that doesn't work, you try again. There are comments in the text files, to be sure, but it is hard to check things. Some recent applications, such as Samba, include a configuration check program (in Samba, this is testparm), which helps.  By way of contrast, the Windows world has "wizards", which are programs that modify the configuration for you.  So long as your configuration needs are known to the wizard, this model works well.  However, if something unanticipated happens, the wizard may be as helpless as any computer program is when presented with input it does not understand.

If an untrustworthy program modifies the registry, or if the software has a bug in it, then it is difficult for a human to repair the problem. Also, there are some registry settings that are not set by ordinary software. The classic example is the bit that decides whether NetBIOS passwords will be sent encrypted or in the clear: you must use the registry editor or similar software to set or clear that bit. In particular, malware frequently changes the registry, and it is hard to figure out what has changed.

Another consequence of the registry is that modifying it can leave the machine in an unbootable state. MS-Windows is so large that it won't fit on a floppy, so the recovery procedure is very involved. Also, the operating system is undisciplined in its layout of files, so there is no way to build a bootable CD-ROM that does anything useful. By way of contrast, Linux is small enough to fit on a floppy (or at most, two floppies). Furthermore, it is possible to build a bootable Linux CD-ROM which reads and writes to RAM disks (semantically file systems, but stored in RAM). So even if a user were to somehow make their Linux machine unbootable, repair is straightforward.

Portability

Microsoft claims that theirs is "the" "industry standard" operating system.  This claim is patently false.  There are industry standards for operating systems, most notably POSIX (see also this site in Denmark), which, ironically, Windows/NT and later are compliant with; but the fact is, there is no industry standard operating system, and it is my fondest hope that there never will be.  Why?  Because computers are too powerful and too general for an "industry standard" operating system to work.  By way of analogy, note that there is no such thing as a standard car.  There are some applications where an operating system must fit in a fixed amount of ROM and every byte is dear.  There are some applications which run on highly networked 64-bit machines with massive parallel processing.  Sun is still making SPARCs, and DEC (no, Compaq; no, HP; no, Alpha Processor; see the history of the DEC Alpha) is still making Alphas.  The fact is, there are a lot of CPUs out there besides the ones from Intel.  Linux will run on any of them: all that is required is a new "back end" for gcc (the GNU C compiler) and a little patience.  MS-Windows, however, will only run on Intel CPUs and on chips from manufacturers who make compatible parts (AMD, Transmeta).

Is there a relationship between portability and security?  I think there is.  In order to write portable code, you have to have a disciplined development process, and that disciplined process results in better security.  For example, the standard way to allocate heap memory in C is through the malloc function and its variants, and the standard way to free it is through the free function (Perl, Java, and Python programmers probably haven't a clue what I am talking about).  The malloc and free functions are notoriously inefficient and unreliable, and every programmer with an ego rewrites them to work "better".  The problem is that these rewrites are generally not portable.  If you want to write portable code, then you have to be disciplined about malloc and free.
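
As a small example of what "disciplined" looks like in practice: use the standard allocator, check every allocation, free exactly once, and null the pointer afterwards.  Nothing below depends on any particular platform, which is exactly the point; the record structure is invented for illustration.

    /*
     * Disciplined use of the standard allocator, as discussed above: no custom
     * allocator, every malloc checked, every object freed exactly once.
     * The "record" structure is a made-up example.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct record {
        char name[64];
        double value;
    };

    static struct record *record_new(const char *name, double value)
    {
        struct record *r = malloc(sizeof *r);
        if (r == NULL)
            return NULL;                    /* let the caller decide what to do */
        snprintf(r->name, sizeof r->name, "%s", name);
        r->value = value;
        return r;
    }

    int main(void)
    {
        struct record *r = record_new("uptime", 99.97);
        if (r == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        printf("%s = %.2f\n", r->name, r->value);

        free(r);
        r = NULL;                           /* avoid accidental reuse after free */
        return 0;
    }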

Performance

A lot of Linux sysadmins and software engineers believe that Linux offers better performance than MS-Windows.  I happen to be one of them.  These days, computers are so powerful that this consideration is utterly irrelevant.


Ease-of-use

A lot of people claim that Linux is hard to use.  However, my 2-year-old grandson [photo: the author's grandson working with TWM on a Dreamcast] is quite capable of working with Linux.  We're not sure exactly what he is doing with it, but he spends tens of minutes opening and closing windows, viewing files, and pounding away at the keyboard.  Given the attention span of a 2-year-old child (note the bottle close at hand), this is remarkable.

But seriously, ease-of-use is a security issue, because if the security software is not easy to use, then it won't be used.  A lot of people claim that the GUI is an easier interface to use than the command line.  It is not obvious that this is so.  The problem with the GUI is that you have to navigate through the menus and screens to get to the setting you want, and the path is not always obvious.  The descriptions of how to drill through the menus and screens are either tedious or very large, because they need graphics.  And of course, a GUI is very difficult to script or automate.  The MS-Windows GUI is also challenging to use when the computer is remote.

Finally, an easy-to-use graphical user interface tends to lull relatively poorly trained people into thinking that they have secured their machine properly.  Security is a battle of wits between the people trying to protect the computers and the people trying to break into them.  Pandering to the poorly trained strikes me as a recipe for disaster.  Instead, the vendors should publish cookbook scripts which show how to protect the computers, and also how to test the computers to make sure that they are safe.

Ease of coding


Distributed systems:  RPC, NFS, NIS, Kerberos, AFS, and NetBIOS



Identification, Authentication, Authorization, and service in the real world and the UNIX world

Whenever we want to engage in a transaction, either in the real world or in the cyber world, there are 4 things that have to happen.  Unfortunately, we don't think about these steps, even though we do them:

  1. Identification: each party has to identify who the other party is, uniquely.  For example, my social security number identifies me uniquely because nobody else has my social security number. But just because somebody has my social security number does not mean that they are me, nor does that mean they are authorized to take action on my behalf.
  2. Authentication: Each party has to verify that the other party really is who they say they are.  For example, my social security number does *not* authenticate me, because my social security number is fairly easy to come by.  I have written it on any number of forms, as have we all.  There is a famous scam where a guy has a business card that says "Senator John Doe", and he hands this business card to airline ticket agents and gets free first-class upgrades on numerous occasions (this story is probably, hopefully, older than 9/11). This is an example of people assuming that the information on a business card is authentic.
  3. Authorization: Each party has to check that the other party is authorized to engage in this transaction.  The most famous example is a kid who is stopped by a cop while driving his/her parent's car.  One of the questions the cop is going to ask is "Do your parents know that you are driving their car"?   The problem is that the cop does not know, in fact, cannot know, if the kid is authorized or not (wise officers will contact the parents directly).  Authorization can be positive or negative.  An example of negative authorization is the prohibition on voting by felons.  How do the poll workers know that the person in front of them is not a felon?
  4. Perform the service.  This could be several steps, and it has security issues all its own.  For example, if there is money involved, the payer would like verification that the money was received by the payee; the payee in turn would like verification that he or she has in fact been paid or will be paid.

In the 1980s, Sun Microsystems invented a protocol, called remote procedure call (RPC), for dealing with distributed systems.  Then, other players invented protocols that ran on top of RPC (or alongside RPC, in the case of Kerberos) to provide the additional functionality needed.  RPC was a clever solution to a problem: every computer on the network had a different information architecture, and RPC provided a universal translator between the different architectures and a network canonical format.  It was a brilliant solution (it is still in heavy use), but it was the wrong solution.  A better solution was to translate everything into ASCII strings (or Unicode strings) and send strings across the network.  The Java language solution (Java was also invented by Sun) was to create a universal virtual machine, so that every (virtual) machine had the same architecture.  No translation required.  Sun also invented a network lookup service, called Network Information Service or NIS (formerly known as yp, for yellow pages, which is what it resembles).  NIS provides an identification service.  Sun also invented a distributed network file system called NFS (Network File System), which has gone through several revisions over the years.  NFS is unreliable and insecure, and everybody knows it, so they are very careful when implementing designs based on NFS.  NFS is widely used and very successful.
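
The point about strings versus binary formats is easy to demonstrate.  In the sketch below, the raw bytes of an integer depend on the byte order of the CPU (which is the problem RPC's canonical format exists to solve), while the decimal text is the same on every machine.  The specific value is, of course, arbitrary.

    /*
     * Sketch of the point above: the in-memory bytes of an integer differ
     * between machines, which is why RPC needs a canonical wire format;
     * sending plain text sidesteps the issue entirely.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int value = 1027;
        unsigned char raw[sizeof value];
        char text[32];

        /* The raw bytes depend on the CPU's byte order. */
        memcpy(raw, &value, sizeof raw);
        printf("raw bytes: ");
        for (size_t i = 0; i < sizeof raw; i++)
            printf("%02x ", raw[i]);
        printf("\n");

        /* The decimal string is the same on every machine. */
        snprintf(text, sizeof text, "%u", value);
        printf("as a string: \"%s\"\n", text);
        return 0;
    }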

A consortium called Project Athena, led by MIT, invented Kerberos.  Kerberos can work with or without NIS, and it authenticates programs.  When one program wishes to authenticate another program, the first program asks to see the second program's Kerberos ticket.  The first program then verifies that the ticket is valid.  The tickets are encrypted, valid only for a little while (typically seconds to hours), and used only once.  The Kerberos system is highly distributed and could easily take care of the authentication needs of the entire planet.  Since Kerberos is open source, no company that wants to control the internet would dream of using it.  Organizations that wish to be free of such controls use Kerberos routinely; it works quite well.

In the model above,

  1. Identification - NIS
  2. Authentication - Kerberos
  3. Authorization - Each machine handles authorization internally
  4. Service - NFS

So long as one stays within the limitations of these systems, which are widely discussed and generally known, they work well.

NetBIOS


The Microsoft UNIX services for MS-Windows

Microsoft realized that UNIX was not going to go away.  For that matter, Novell hasn't gone away, either.  So Microsoft created an optional product, UNIX services for MS-Windows, that allows a machine running Windows/NT 4.0 (or later) to interoperate with UNIX machines.  Sounds good; that is what Samba does.

But interoperability only on Microsoft's terms.  In particular, Microsoft developed their own Kerberos server that used one of the unused fields in a ticket for authorization, which was a violation of the spirit if not the letter of the Kerberos protocol.  As a practical consequence, a Microsoft Kerberos client cannot use a UNIX Kerberos server, because the Microsoft client needs authorization information which the UNIX server cannot provide.  However, a UNIX Kerberos client can use a Microsoft Kerberos server, because the UNIX client ignores the authorization information.  Essentially, Microsoft tried to take over the servers.  Microsoft failed; the UNIX people howled long and loud.  It is irrelevant now, because there are third parties that give away a proper Kerberos client for Windows.



Money and software quality

In a competitive marketplace, companies have to provide better products that meet people's needs in order to survive.  In a monopolistic market, they do not.



DLL Hell

One of the components of MS-Windows is a Dynamic Link Library, or DLL.  A DLL is roughly equivalent to a shared library in the Linux world, with some key differences:

  1. Linux shared libraries have version numbers associated with them, and a mechanism for defaulting if no specific version is required (a sketch of the Linux mechanism follows this list).
  2. On occasion, Microsoft has shipped new DLLs without changing the version number, so that the only way to tell whether you have the proper DLL is by measuring its size in bytes.
  3. Installation of applications from Microsoft has been known to replace DLLs with utter disregard for how this affects other components in the system.  This is DLL Hell: application A needs version X of a given DLL, while application B needs version Y.
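
To sketch item 1: on Linux, a shared library's file name carries its version, and a program can either name a specific version or accept the default through the unversioned name.  The example below pokes at this with dlopen(); "libexample" is a hypothetical library name, and a normally built program would simply link against a soname chosen at build time rather than loading by hand.

    /*
     * Sketch of Linux shared-library versioning, as mentioned in item 1 above.
     * "libexample" is a hypothetical library; on a real system you might try
     * "libz.so.1" or similar.  Compile with: cc demo.c -ldl
     */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* Ask for a specific major version.  The file libexample.so.2 is
         * typically a symlink to the exact revision, e.g. libexample.so.2.4.1. */
        void *versioned = dlopen("libexample.so.2", RTLD_NOW);
        if (versioned == NULL)
            fprintf(stderr, "specific version not found: %s\n", dlerror());

        /* Ask for the default.  The unversioned name libexample.so is usually
         * a development symlink pointing at whichever version is current. */
        void *unversioned = dlopen("libexample.so", RTLD_NOW);
        if (unversioned == NULL)
            fprintf(stderr, "default version not found: %s\n", dlerror());

        if (versioned) dlclose(versioned);
        if (unversioned) dlclose(unversioned);
        return 0;
    }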


Conclusion

There are several conclusions from all this, most of which I hope are a little surprising.

Microsoft is the best thing to happen to the UNIX community.  Microsoft forced us to think about issues of usability in a way we hadn't thought of them before. However, Microsoft has engaged in a series of dirty tricks, and those dirty tricks have colored the views of a lot of people; I'm one of them.  Fortunately, a lot of people in the open source community decided that just because Microsoft did something did not necessarily mean it was a bad thing.  Still, Microsoft has a miserable track record for corporate ethics with its partners, its employees, its customers, and its competitors, and whenever Microsoft says something I am sceptical in the extreme.  Microsoft, by developing less and less efficient operating systems, has spurred the hardware vendors to develop faster and faster machines.  For what I do, which is computationally intensive, that is A Good Thing.  For most applications, speed is irrelevant; for some applications (games, a lot of scientific work), speed is important.

The best computer system for non-computer experts to buy is... the Apple Macintosh. The Macintosh is very easy to use and tends to be reliable as hell, but it is somewhat pricey. I think that, for most applications, the cost of the hardware is swamped by the cost (or value) of the people who use the systems, so I think it makes sense to buy Macintoshes. With OS X, the Mac has a real UNIX system under the hood, so if you have to run UNIX applications, you can.  The Macintosh proves that a UNIX system need not be hard to use. While I wish Apple had open-sourced Mac OS X (the underlying kernel, Mach, is open source, but the easy-to-use user interface on top of Mach is proprietary), I recognize that the decision is their prerogative.

Responsibility for security lies with the management of the organization. If management tells us geeks to write secure software, that security and reliability are design goals, then we know how to do that. If schedule and cost are design goals, we know how to do that, too. If performance is a design goal, well, we can do that as well. But software technology has not advanced to the point where we can do security, reliability, cost, features, and performance all at the same time.



I am always anxious to get feedback; please contact me with any comments, criticisms, suggestions, or questions.  I also have an older, unbiased comparison of operating systems.

MS-Windows® is a registered trademark of Microsoft Corporation.

Linux® is a registered trademark of Linus Torvalds.

