Keeping Passwords Embedded In Code Secure?

JPyObjC Dude asks: "When designing any system that requires automated privileged access to databases or services, developers often rely on hard coding (embedding) passwords within the source code. This is obviously a bad practice as the password is then made available to anybody who has access to the source code (e.g. software source control). Putting the passwords in configuration files is another practice, but it is still quite insecure as cracking hashed passwords from a text file is a trivial exercise. What do you do to manage your application passwords so that your system can run completely automated and yet make it difficult for hackers to get their hands on this precious information?"
  • Passwords suck (Score:5, Informative)

    by kunwon1 ( 795332 ) * <dave.j.moore@gmail.com> on Saturday December 30, 2006 @01:55AM (#17406658) Homepage
    Use SSL with certificates. It's more easily automated and just about anything worth running has the option.
    • Re: (Score:3, Insightful)

      by xenocide2 ( 231786 )
      But what do you do when you need to revoke the cert? The problem is that they want authentication without the rigor of.. authentication.
    • Use SSL with certificates. It's more easily automated and just about anything worth running has the option.

      Makes little difference from a security standpoint, though. If the attacker can get at the file system, then he can read the private key.

      • Ok, so you use SSL with client certificates for authenticating against the server, but you are worried that the evil doers might simply read the keys right off the disk.

        In that case you can do what Apache and OpenSSH do to protect that kind of information:

        Encrypt the keys and store them on disk.

        When the program starts (maybe it's just a keyholder process) then the user is prompted for a passphrase and the key is stored in memory; if the keyholder sees any processes it doesn't like on the machine (like a d
        • When the program starts (maybe it's just a keyholder process) then the user is prompted for a passphrase and the key is stored in memory

          And thus you're right back to the initial problem: Either you have to have an attended startup (and restart) process, or else you have to store a password somewhere on the system.

          • by Dion ( 10186 )
            Well, the OP didn't say that you weren't allowed to request a password at boot.

            I think most people are missing the point in this question anyway; it seems as though the OP wants to let applications access services with all the same credentials and keep those credentials from the user. At this point you have already lost, as that's simply impossible (see DRM).

            A better way would be to write a (trusted) server that the (untrusted) clients can talk to instead of letting the clients talk directly to the bac
            • Well, the OP didn't say that you weren't allowed to request a password at boot.

              Agreed. The OP didn't say much at all. However, the common reason that developers want to put passwords in their code is to allow the app to access a resource that requires authentication. The most common example of that is a web server that needs access to a database.

              it seems as though the OP wants to let applications access services with all the same credentials and keep those credentials from the user

              I don't see that in the original question at all.

              A better way would be to write a (trusted) server that the (untrusted) clients can talk to instead of letting the clients talk directly to the backend database and services with the security problems that creates.

              I think that is probably the architecture under discussion. The question, then, is how the trusted server obtains the credentials it needs to access the database and services.

              • by Dion ( 10186 )
                If the OP is talking about a webserver then the answer is to simply put the credentials in a config file and secure the machine as usual.

                If the machine that the server runs on is trusted then it just reads the config file.

                If the machine is under an untrusted user's control, then a trusted server must be implemented that enforces the limitations needed for that user, and the credentials for that service can be stored in a config file on the untrusted client.

                The only time you would ever be worried about storing
                • If the OP is talking about a webserver then the answer is to simply put the credentials in a config file and secure the machine as usual.

                  Well, that's what you do because you have no other choice, but it doesn't mean it's secure. Any attacker who gets access to the file system then has access to whatever resources the web app uses.

                  I agree that a given credential should always have the minimum necessary set of permissions, but it's not uncommon, especially for a web app, that the minimum necessary set is full CRUD access to the entire set of tables.

                  • by Dion ( 10186 )
                    Well, it's as secure as possible.

                    If the attacker already has a level of access to the system that allows both access to the config file (it might only need to be readable by root) and has network access to the database then you have already lost.

                    • Well, it's as secure as possible.

                      If the attacker already has a level of access to the system that allows both access to the config file (it might only need to be readable by root) and has network access to the database then you have already lost.

                      I would agree that it's as secure as practical... not as secure as possible.

                      If possible, you should assume that your Internet-facing hosts may be compromised, and try to arrange it so that compromise of those hosts doesn't lead to uncontrolled access to other, more critical resources, such as the database. Which is why it would be nice to avoid putting the DB password in a config file or in the source code. Unfortunately, that goal is mostly incompatible with the goal of supporting unattended restarts.

    • by ryanr ( 30917 ) *
      I don't see how an SSL certificate is going to help here. Since he's talking about authenticating clients, it would be a client certificate, which would have to be embedded in the app, same as a static password.
  • Just `strings` and patience.
    • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Saturday December 30, 2006 @03:41AM (#17407192) Journal

      As a developer who has hardcoded passwords into applications before, I can safely say that using 'strings' would NOT have worked, as I never actually created a string for such a password -- rather, I would implement a backdoor password as an FSM, with each state having its own separate case code that compares a character in the string entered to a single character from the actual password. Any deviance from the path for the FSM would fall through to the normal password handling facility, using the characters entered so far as the string entered. The passwords in such a case were non-trivial, between 20 and 40 characters, including combinations of letters, numbers, punctuation, and blanks, so the likelihood of stumbling across them accidentally was remote in the extreme. Changing the password was only possible with access to the source code, and was done in a way that was simple to maintain, but in the over 10 years of use that these programs received in the companies they were written for, the security of these hard-coded passwords was never compromised (given the industry we were in, if it had been, we would have heard about it; the way we wrote it, a breach would have caused a panic).

      It's probably not something I'd ever do these days, but back in the 80's and early 90's, it worked very well.
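
      To make the technique concrete, here is a minimal sketch of that per-character idea (hypothetical Python, not the original 80s code): the backdoor password never exists as a contiguous string literal, each state checks one character, and any deviation falls through to the normal handler with the characters entered so far.

          # Hypothetical sketch of a per-character FSM password check.
          # The backdoor password never appears as a contiguous string
          # literal, so `strings` on the binary finds nothing.

          def normal_password_check(entered):
              # Placeholder for the ordinary password-handling facility.
              return False

          def check_password(entered):
              # Characters are computed at runtime, not stored as one string.
              expected = [chr(c) for c in (0x70, 0x40, 0x73, 0x73, 0x21)]
              state = 0
              for ch in entered:
                  if state < len(expected) and ch == expected[state]:
                      state += 1                          # stay on the FSM path
                  else:
                      # Any deviance falls through to the normal facility.
                      return normal_password_check(entered)
              if state == len(expected):
                  return True                             # full backdoor match
              return normal_password_check(entered)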

      • by ryanr ( 30917 ) *
        You don't think that's actually safe, do you? You've just made it more complicated (aka fun to crack.)
        • by mark-t ( 151149 )
          Well, in the environment we were in, it was as safe as the physical security on the building, and as nobody else would have ever had reason to suspect that there was such a backdoor in the first place (we never told anyone until long after the software fell into disuse, and even then it wasn't deemed a security risk by anybody because physical access to the computer was necessary to enter the password anyways). In the 12 years that the software was used, we only ever needed to use the backdoor once... and
          • and as nobody else would have ever had reason to suspect that there was such a backdoor in the first place (we never told anyone until long after the software fell into disuse, and even then it wasn't deemed a security risk by anybody because physical access to the computer was necessary to enter the password anyways)

            Excuse me?

            Whenever I get a piece of software of which I cannot verify the source, I suspect a backdoor password being there. This is basic security and has been documented at least since the fir
            • by mark-t ( 151149 )

              Whenever I get a piece of software of which I cannot verify the source, I suspect a backdoor password being there

              Yeah... and...?

              I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?

              This wasn't an option for the companies who contracted us to write the software for them... and no, we didn't tell them about the backdoor. Neither, however, did we ever actually use it except the one time in the 12 years that the software was being used that it was necessa

              • I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?

                Generally speaking, that is indeed the correct answer.

                This wasn't an option for the companies who contracted us to write the software for them...

                Sure it was, they could have contracted someone else who gave them the possibility to review the source code.

                and no, we didn't tell them about the backdoor. Neither, however, did we ever actually use it except the one time in the 12 years that the software was being u
                • by Vellmont ( 569020 ) on Saturday December 30, 2006 @10:02AM (#17408416) Homepage

                  2. meeting me in court

                  So you're really going to spend tens of thousands of dollars to recover non-existent damages to prove a point? The conversation might go something like this:

                  Judge: I see you're suing for 10 million dollars, but you don't list your damages. How did the defendant's actions hurt your business? Was there a security breach? Did the defendant not meet the terms of the contract?

                  You: Well not really. The contract didn't say anything about what I'm suing about. Nobody broke in and we had a lot of means to prevent it, but someone COULD have broken in. Basically this guy just made me real mad because I didn't agree with his security procedures. Dag-nab-it, the guy slightly increased my risks! We don't have any damages, I just assumed that whenever I don't like something, I just sue the pants off them.

                  Judge: Umm.. Right. Well, sorry, civil courts operate on damage to one party caused by another. Criminal courts operate where criminal laws have been broken. Since there are no damages you can show, and no laws have been broken, I'm throwing this case out. Didn't your lawyer tell you all this?

                  You: Only the first 10 lawyers. Then I found this really good one... or at least so I thought at the time. He charged me $20,000 and told me it'd be thrown out at the first hearing. I guess I should have gotten a better lawyer.
                  • So you're really going to spend tens of thousands of dollars to recover non-existent damages to prove a point? The conversation might go something like this:

                    Since the company I am working for, and for which I am responsible for security, works a lot with sensitive information from customers, the risk of losing their trust is quite real, even more so if it becomes publicly known that such a backdoor existed. In that case there would be real damage even if no actual security breach ever took place.
                    • Re: (Score:3, Informative)

                      by mark-t ( 151149 )
                      The danger of it ever having become publicly known that there was a backdoor was negligible... the number of companies that we wrote software for was countable on one hand, and being vertical market software, there was no danger of it being used elsewhere.
                    • So essentially you're saying that your job is to recommend against closed-source software. That's great.

                      No it is not. Giving my company access to the source code can be based on an NDA, and in no way requires you to produce open source software.
                      We want to be able to verify that no backdoor exists. Alternatively, we could arrange a guarantee by means of a contract that no such thing exists, with a very hefty penalty attached if it turns out otherwise.

                      That is not, however, what this guy is talking about at all
                    • The danger of it ever having become publicly known that there was a backdoor was negligible... the number of companies that we wrote software for was countable on one hand, and being vertical market software, there was no danger of it being used elsewhere.

                      Well, you just made it public..
                    • by mark-t ( 151149 )

                      Yep... some years after the software is no longer used... so it's not an issue. But I can virtually guarantee that nobody on slashdot knows which companies used it or even what software I am talking about. As I said, the number of companies that used the programs was countable on one hand and the other programmer and I personally knew every employee of the companies that contracted us to write software for them, which is why we were able to contain the security risks involved.

                      Like I said before though,

                    • Like I said before though, it's not a mechanism I would use today.

                      I got that part, but the issue here is this:

                      As I mentioned before, in itself, there are valid reasons to have a backdoor, and as you described the one incident where you made use of it, it doesn't sound like your use of it was invalid at all.

                      The issue is that by implementing it, and by not informing your customer about it, you exposed them to a security problem that they could not judge, and that is what I take issue with.

                      Today your use of a
                    • by mark-t ( 151149 )
                      What has changed is that computers are much more networked than they were back then, so remote access would almost certainly be expected, even _IF_ it wasn't explicitly a design requirement. As a result, we would not be able to make any assumption that arbitrary people might not be able to make attempts at breaking the software. In the exact same situation, the likelihood of a backdoor being discovered today would still be the same as it was then, but the world isn't in the same situation it was back then, so
                    • What has changed is that computers are much more networked than they were back then, so remote access would almost certainly be expected, even _IF_ it wasn't explicitly a design requirement. As a result, we would not be able to make any assumption that arbitrary people might not be able to make attempts at breaking the software.

                      Well, I understand your assumption here, but a substantial part, if not the majority, of all security breaches are inside jobs, and not some random network-based hacker. You sure you also
                    • by mark-t ( 151149 )
                      Actually, the assumption was correct... as exemplified by how the software was used. It probably would not have been possible to make that assumption if we did not know who was using our software... but we did, so we could. Today, we could not... so I would not.
                • I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?

                  Generally speaking, that is indeed the correct answer.

                  So what do you do when you suspect a backdoor in the software published by a monopoly or by each member of an oligopoly? Do you put your business on hold for 20 years waiting for the patent to run out?

                  Sure it was, they could have contracted someone else who gave them the possibility to review the source code.

                  Unless it is not commonplace for the monopoly or among the members of the oligopoly to allow third parties to review the source code.

                  • So what do you do when you suspect a backdoor in the software published by a monopoly or by each member of an oligopoly? Do you put your business on hold for 20 years waiting for the patent to run out?

                    Pay someone to write an alternative, and live in a place where software patents are not valid to begin with. And yes, we did the first, and yes, I am living in a place where software patents are not valid.

                    On top of that, making sure 3rd parties do not get access to data about our customers is actually a legal
                    • Pay someone to write an alternative

                      An entire operating system, from the ground up?

                      and live in a place where software patents are not valid to begin with

                      Which such developed country has a permissive immigration policy?

                      EU law

                      Have EU courts routinely dismissed patent infringement cases on grounds that methods of communication involving arguably novel processes of data processing are not valid subject matter for a patent?

                • by mark-t ( 151149 )

                  Sure it was, they could have contracted someone else who gave them the possibility to review the source code.

                  I guess they could have done that... but nobody else who worked for any of the companies that contracted us would have had even the slightest clue how to do that, so they would have had to hire somebody else. If other programmers were going to review our source code (which we would know about, since the code was at a site that we controlled, and there was no remote access to it), it would DEFINITELY

            • by drsmithy ( 35869 )

              Whenever I get a piece of software of which I cannot verify the source, I suspect a backdoor password being there.

              Do you have source code to all your hardware's firmware and the complete schematics of its design?

                • Do you have source code to all your hardware's firmware and the complete schematics of its design?

                For the BIOS we have the source code, yes. We don't have complete hardware schematics for everything to the detail that we would want, but enough to verify its workings. With regards to hardware the requirement is slightly different, however; having the schematics is only a small part of the picture, being allowed to verify that machines are produced in a secure environment and according to the published specs is at leas
  • by ari_j ( 90255 ) on Saturday December 30, 2006 @01:55AM (#17406668)
    I wasn't aware that it was a common practice to store database passwords as hashed strings in configuration files. Does your program run a brute-force attack against the hash every time it needs to create a database connection?
  • On-disk passwords simply aren't secure. If you need automation, you want to secure the systems as much as possible.

    That said, salted hashes are pretty tough to crack. Changing the passwords regularly will make it unrealistic for a cracker to obtain the passwords through brute force.
    • Re:No answer (Score:5, Insightful)

      by FooAtWFU ( 699187 ) on Saturday December 30, 2006 @02:10AM (#17406762) Homepage
      That said, salted hashes are pretty tough to crack. Changing the passwords regularly will make it unrealistic for a cracker to obtain the passwords through brute force.
      I don't think this is really the problem - the problem is that you have something like, say, a fairly standard call for connecting to a MySQL database. You might get the strings from a config file, but you need to pass the password as plaintext:

      mysql_connect('dbserver.foo.org','apache', 'z*UIYD!0');
      or similar credentials.

      And you know what? That's not secure. But then again, the database it's connecting to should be as firewalled as all get-out, and even if it's NOT firewalled, it should have host-based authentication so that you can only access it with that password from the appropriate machine (your web server). At that point, if someone can hook into your LAN to sniff traffic or spoof things, you're probably in deep trouble anyway - but perhaps you could configure the database server to only accept connections over a VPN of some sort with appropriate authentication certificates.
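
      For illustration, a minimal sketch (Python; the path and section names are hypothetical) of keeping those strings in a permissions-restricted config file instead of the source. It doesn't make the credential secret; it just keeps it out of source control and lets you change it without editing code:

        # Sketch: load DB credentials from a config file that is readable
        # only by the service account (e.g. chown apache, chmod 600).
        import configparser

        cfg = configparser.ConfigParser()
        cfg.read('/etc/myapp/db.conf')              # hypothetical path

        host = cfg['database']['host']              # e.g. dbserver.foo.org
        user = cfg['database']['user']
        password = cfg['database']['password']      # still plaintext on disk
        # ...hand these to mysql_connect() or your driver of choice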

      • Re:No answer (Score:4, Interesting)

        by theshowmecanuck ( 703852 ) on Saturday December 30, 2006 @04:39AM (#17407344) Journal
        Someone mod parent up more. As stated, DB access usually happens over an internal network (99% of the time) and only the outside interface of the web server is open to the public (assuming it is an app that is accessible to the world and not an internal app anyway). On bigger apps, only the model components on the backing app server(s) should be doing the DB access etc., and those should definitely be behind the firewall along with the DB. In all cases, if the firewalled internal network is compromised, you are really screwed anyway, so what does encryption etc. matter? Unless you don't trust the people who administer your apps, who could wreck your business more easily by not doing backups and taking a baseball bat to the hard drives, or something equally brutal.
  • You can't secure a client-side password without another password to protect it. Which is contrary to what you're trying to accomplish. If you could give a bit more detail about what you're trying to accomplish, we could probably better enumerate the trade-offs.
  • Public-key crypto? (Score:3, Insightful)

    by FlyByPC ( 841016 ) on Saturday December 30, 2006 @02:04AM (#17406732) Homepage
    IANRAProgrammer, but...

    I believe public-key cryptography could do this. Encode the public key (several kilobits, if you're paranoid) in the source, and have the program use it to authenticate the secret key given by the user. Publish the source code on YouTube for all the good it will do an adversary, right?
    • Re: (Score:3, Informative)

      by strider44 ( 650833 )
      Nope, still won't work against a cracker. This is just another form of DRM and DRM is fundamentally flawed. (If you don't believe me show me a major game that hasn't yet been cracked.)

      In short, if a cracker has full access of a program or system and the system has access to the passwords (even if it does some fiddling around before revealing the passwords) then the cracker has full access to the passwords. There's no way to protect against that except by not allowing any access to the passwords (by ju
      • Apologies - I obviously misinterpreted what you said (I thought that you meant for communicating between processes, not someone who is using it actually inputting a password). Actually your idea will work against a cracker and I just made a fool of myself, damn no edit key. It's not the most practical solution, since there would be only one password that every user must have, and it also won't give automation like the summary wants, but it will still work against someone without that password.
    • What you're describing is effective--and as easy as wrapping your communication with your db with libssl.

      What the user appears to be trying to accomplish is allowing db access without querying the user for a password. To do this, he believes he needs to embed the authentication credentials in the application or its configuration files. To that end, he's asking how Slashdot folk do this securely.

      If it's assumed that a person using the software is authorized to access the DB, because the person has access t
  • Putting the passwords in configuration files is another practice, but it is still quite insecure as cracking hashed passwords from a text file is a trivial exercise.

    This simply isn't true. If salting is used (which is quite commonplace these days), it's pretty much going to be impossible to recover the password from the hash.
    • Re: (Score:3, Insightful)

      by Nos. ( 179609 )

      The problem with this is... how does the program get the password it needs? If it's hashed with a salt... well, that's one-way, so the program would have to do a brute force every time it wanted to use that password.

      There's little point to encrypting a locally stored password, as the decryption technique must be relatively simple to allow the program to access it. The idea is to secure everything around it, including the system that is being connected to. Use host based authentication, firewalls, etc.

    • Not quite correct. The problem is, in order to verify that the password matches the hash, the program needs to know how to salt it. Which means that the salt has to be stored somewhere and applied to the password in some known way, which means it's basically an extension of the password (which you can't assume your opponent won't know).

      The point of salting isn't to protect an individual password. It can be as easily brute-forced/dictionaried as anything. As I understand it, the point of a salt is that t
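
      To make the distinction concrete, here is a small sketch of salted verification (Python standard library; the iteration count is illustrative). The salt sits in the clear next to the digest, so it defeats precomputed tables, but it does nothing for a program that has to present the actual password to a server:

        # Sketch: salted password verification. The salt is stored beside
        # the digest; it blocks rainbow tables, but an attacker with the
        # file can still brute-force an individual password.
        import hashlib, hmac, os

        def make_record(password):
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
            return salt, digest

        def verify(password, salt, digest):
            candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
            return hmac.compare_digest(candidate, digest)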
  • Kerberos (Score:4, Informative)

    by forlornhope ( 688722 ) on Saturday December 30, 2006 @02:11AM (#17406766) Homepage
    Kerberos was built for just this situation. Read up on it. I think it's even available as Active Directory for MS.
    • Kerberos was built for just this situation. Read up on it. I think it's even available as Active Directory for MS.

      You're right that MS provides a bastardized version of Kerberos, but wrong that it helps.

      In order to get an authentication ticket from the ticket-granting server, you have to authenticate to the ticket-granting server. If the machine can start up completely unattended, that means it has the Kerberos authentication credentials stored on disk somewhere, which means the attacker can get them, and can then authenticate to the ticket-granting server and get whatever authentication tokens he needs.

      • by ryanr ( 30917 ) *
        Under exactly the right circumstances (i.e. all of your userbase always logs into the domain before running the database client app in question), this pushes the authentication problem to exactly where he wants it. Unfortunately, the original poster hasn't given nearly enough information to tell if Kerberos/AD/any-other-SSO will help his situation.

        But as you've indicated, for other situations this won't help.
        • Under exactly the right circumstances (i.e. all of your userbase always logs into the domain before running the database client app in question), this pushes the authentication problem to exactly where he wants it. Unfortunately, the original poster hasn't given nearly enough information to tell if Kerberos/AD/any-other-SSO will help his situation.

          But as you've indicated, for other situations this won't help.

          Yeah, it seemed to me he was talking about completely unattended startup of a server that requires access to a database (or whatever).

  • 'chown apache:apache database.conf && chmod 600 database.conf' That's good enough for me. Generally I'm not concerned with people accessing the physical hardware in order to bypass permissions, that's another issue entirely.
    • Ever worked anywhere where security concerns meant that the UNIX Admins aren't supposed to have access to the database contents?

      It gets much more fun.
      • So as admins do they have access to the database program binary? What happens if they alter it to allow them access or just dump the data elsewhere when launched? Checksums and IDS against your own admins?
  • The better practice would be to make raw access a non-issue. Don't give the user account the privileges to accomplish anything that wouldn't be possible with the application itself. If you're using some sort of SQL database, consider limiting the permissions on the account to stored procedures that correspond to your application's features.
  • Wrong Question (Score:5, Interesting)

    by eric.t.f.bat ( 102290 ) on Saturday December 30, 2006 @02:42AM (#17406898)
    First: only an idiot would put a password into source code. That's what configuration files are for. What, you want to have to edit a script every time the password changes? Second, there's no point encoding, encrypting or otherwise "securing" the configuration file. If a user has access to your configuration files, he has access to everything else, and all your security is useless. So really the question is: I don't want the neighbours to see me naked. What should I tattoo on my butt-cheeks to make me safe?
    • But if everything else of importance is encrypted, or, heaven forbid, you have multiple layers of security locking down each part of your network, you *don't* have a problem. With enough logging you should catch him in his futile attempts to cause mischief, kick him out, and fix whatever hole he used to get in. Consider it the bear-trap model of security. Sure, anyone can walk around a single bear trap, but 5,000 closely laid in a confined area? That's more difficult, and much more fun to watch on a hidden s
    • I disagree. I believe in multiple layers of security. If one layer fails, there are still mechanisms in place to keep an attacker out. How does a user have "access to everything else" if they have access to your configuration files? What if the security on your configuration file was misconfigured? Or the mechanism that protects your configuration file has a flaw in it? What if it is stored in a publicly accessible directory, and "everything else" is not? If the information is encrypted, the attacker may be
    • I don't want the neighbours to see me naked. What should I tattoo on my butt-cheeks to make me safe?

      Goatse?
  • If you want to do something quick and dirty without bothering to code in some robust password mechanism (let's say you want to use the same old password every time, hard coded as you say), why not create a function to generate the static password using deterministic methods? People with access to the source code will be able to spot what you've done, but at least they won't know the password without actually running the code. You could try to obfuscate the function as much as possible and store the generat
    • by flonker ( 526111 )
      More along these lines: create a seed password, set it in the source and in the database. Have the application randomly (this is the hard part) change the password in a non-deterministic manner, changing it first in a backup config file, then on the database server, then in the main config file. (In case of failure, the admin can copy the password from the backup config file to the main config file.) Possibly have the application rotate the password every so often. This protects the password from someon
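
      A rough sketch of that ordering (Python; change_db_password() is a stub, since the real call depends on the database): the new password goes to the backup config first, then to the server, then to the main config, so a failure at any step leaves the admin a recoverable copy.

        # Sketch of the rotation order described above. A crash between
        # steps leaves either the old password in the main config or the
        # new one in the backup, so an admin can always recover.
        import os, secrets

        def change_db_password(new_password):
            # Stub: issue the database's own command here,
            # e.g. ALTER USER / SET PASSWORD.
            raise NotImplementedError

        def rotate(main_conf='db.conf', backup_conf='db.conf.new'):
            new_password = secrets.token_urlsafe(24)    # non-deterministic
            with open(backup_conf, 'w') as f:           # 1. backup config first
                f.write(new_password)
            change_db_password(new_password)            # 2. then the server
            os.replace(backup_conf, main_conf)          # 3. then the main config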
    • by ryanr ( 30917 ) *
      People try to do this sort of thing all the time. It's not actually secure, of course, but it makes for some entertainment. If you want some to play with, Google "crackme."
  • w/o trusting root, your whole application comes crashing down. All the chmodding in the world won't save you from root.

    The only other way to do this would be to have your app retrieve the key from a trusted remote location via SSL, then use it on the remote app... which is sounding more and more like a kerberos or mutual SSL key thing anyways.
  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Saturday December 30, 2006 @03:28AM (#17407114) Journal

    First, let me dispose of one issue:

    This is obviously a bad practice as the password is then made available to anybody who has access to the source code (eg. software source control).

    It's much, much worse than that, because the password is also available to anybody who has access to the binary. "man strings".

    Others have suggested various options, but absolutely none of them work.

    • "Shrouding" passwords, whether in code or in config files. Don't make me laugh. No matter how you try to obfuscate the password, all of the code needed to recover the password (or the hash, or whatever needs to be submitted to perform the authentication) is there, just waiting to be dug out. You can make it obscure, but you can't make it secure.
    • Public key authentication? Bzzzzt. The private key has to be present on the file system, where an attacker can grab it. "So, encrypt it!", I hear. Umm, you have to have the passphrase to decrypt it somewhere in your code or config files.
    • Kerberos? You still have to have some mechanism for authenticating to the ticket-granting server, and if the attacker can get that, then he can also authenticate, just like you.
    • Host security module? TPM with auth credentials bound? Well, these do protect against some attacks, but if the attacker can own the server, he can use the hardware token to do the authentication for him, just as though he were the server. These do prevent him from being able to take advantage of physical access to the machine to reboot it with another OS and then dig through the drive contents. Assuming the system is configured tightly enough that booting a different configuration is the only way in, then a TCPA TPM actually does the job. This of course, requires that the system have no exploitable security holes (ha!).

    The bottom line is: If the machine has all of the information needed to perform the authentication without human intervention, then an attacker who gains control of that machine has all of the information needed to perform the authentication. Period. No getting around it. The best you can do is limit the damage in the case where the attacker has only partial access.

    What is that best? For a network-accessible machine, do the following:

    • Lock down the system as tightly as possible. Standard system security stuff, but be as hardcore about it as you can.
    • Use an authentication protocol that can be performed between a highly-secure HSM and the remote resource, using the main machine as a passthrough only.
    • Secure the HSM with a password or authentication key, so that the HSM won't do its authentication job without first being authorized.
    • Use a TPM to bind the HSM authentication data to the system state. This will make patches a PITA, but we're going for maximum security here, so that's okay.
    • Put the whole assemblage in a secure facility, ensuring (hopefully) that no potential attacker gains physical access to the machine.

    That's a lot of work, and it's still not completely secure. Luckily, very little needs even that level of security. Oh, and there aren't any OSes available that make good use of a TPM yet, so it's not really possible.

    For most systems, what I'd really recommend is: Put the auth credentials in plaintext in a config file and limit access to that file to the bare minimum. If you have Mandatory Access Controls (e.g. SELinux), configure them to allow only the server process to read that file. Then, lock the whole system down as tightly as possible (within existing constraints). Ensure that a bare minimum number of people have logins on the machine, and that they all have minimum permissions, firewall it as completely as possible, and keep it up to date on security patches. Finally, put it in a locked room and tightly control physical access to it.
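
    As a concrete sketch of that baseline (Python; the path and file format are made up), the whole trick is in the file permissions and the surrounding lockdown, not in the file's contents:

      # Sketch: write the plaintext credential file so that only the
      # service account can read it (mode 0600); SELinux policy can then
      # further restrict it to the one server process.
      import os

      def write_credentials(path, user, password):
          fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
          with os.fdopen(fd, 'w') as f:
              f.write('user=%s\npassword=%s\n' % (user, password))

      write_credentials('/etc/myapp/db.conf', 'apache', 'z*UIYD!0')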

    Of course, even this reduced-security approach is too onerous in many cases, so you have to make compromises. That's where a good understanding of security and plenty of hard thinking about what compromises can be made come in.

    There ain't no silver bullet.

    • by swillden ( 191260 ) * <shawn-ds@willden.org> on Saturday December 30, 2006 @03:38AM (#17407172) Journal

      Responding to myself... Uh oh.

      It occurs to me that I may be answering the wrong question. If the assumption is that the attacker won't have access to the server, but may have access to the development team's source code, then the answer is simple: put the password in a config file that the developers don't have access to.

      • Wrong question or not, your answer was awesome. It confirmed a lot of what I already believed to be true and gave me some new tidbits of information as well. Many thanks. (o:
    • by chthon ( 580889 )

      A very good summary of what I found out myself.

      I have the same problem, and what I did was just use no password at all, but create different roles for the system.

      Our programs have only a certain role in which they can insert or update only certain parts of the database. Really sensitive tasks must always be done by an operator, who has to log in manually.

      Unfortunately, we are using MySQL, which is not as rigorous. For update actions the restricted role must also have query capabilities.

      I think that by u

    • by Bishop ( 4500 )
      Parent is correct (unlike so many other posts). Storing the password in the clear in a config file is good enough in most cases. Obviously you want to restrict access to that file. Attempts to obfuscate the password are pointless. If an attacker can read the config file, then they can probably read the process's memory.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Shrouding passwords is terrific, as it makes customers, QA and marketing shut the hell up.

      Where I work, we have a product that needs to store a shared encryption key for communications. The interaction with customers, QA, and marketing went like this:

      Them: OMG, the password is there in plain text

      Us: The password is in a file readable by root only, as is the install directory. If you can read it, you already pwn the box

      Them: OMG, the password is there in plain text

      Us: The product has to run unattended as root. There's nothing sensible we can do about it.

      Them: OMG, the password is there in plain text

      End result: we changed the program to encrypt the password using a fixed key. Customers, QA and marketing finally shut the hell up.

      • Yes, shrouding passwords can have benefits unrelated to security. As long as those who are evaluating security realize this, there's no problem. In order to avoid looking stupid to the occasional customer who *does* understand security, I hope your manual includes a statement like "The obfuscation of the password prevents casual viewing by system administrators, but provides no real security for the password. Security is provided by proper system configuration, limiting access to the file containing the pa
      • by vrmlguy ( 120854 )
        "Them: OMG, the password is there in plain text" ... which is why I always rot-13 any passwords I embed in my source code.
    • by uradu ( 10768 )
      Great statement of the facts, should be required reading for any middle and upper management. Sadly, this scenario is extremely common in enterprise environments, where there are tons of unattended custom gateway and batch processing type applications running on various servers, transferring data from one system to another and manipulating/massaging it while doing so. Typically these apps are either boot time services or stand-alone apps that get kicked off at boot time, without any user intervention. As su
  • It's all about managing your risk. This is what I do:

    • The password is loaded from a configuration file
    • The password (in the configuration file) is encrypted
    • The encryption key is stored in a script that's only accessible by a generic system account
    • The automated job runs in a system that has to store the password; the stored password is only readable by trusted employees

    Is the system 100% secure? No. Is the system secure enough? Yes! The key is risk management; the probability of our system being co

    • Why not cut out the stored encrypted password bit and have the password stored only where those trusted employees can access it? Sounds like a simple user/file permission system really.
      • by GWBasic ( 900357 )
        Why not cut out the stored encrypted password bit and have the password stored only where those trusted employees can access it? Sounds like a simple user/file permission system really.

        That's what I do; it's an automated system.

  • I see a lot of elaborate answers, but we all seem to be forgetting something obvious. When the service comes up, have it prompt an administrator for the password, then store it in memory. Ultimately this is only obfuscation, but passwords get stored in memory all the time, and I think the rate of compromise remains fairly low. At any rate, it is a lot less likely an attacker will find it there than in a plaintext file on the disk. Apache HTTPD and all the MTA services I use do this when using
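
    For example, a minimal keyholder sketch (standard-library Python; illustrative only): prompt the operator once at service start, after which the secret lives only in this process's memory.

      # Sketch: ask for the password once at startup; it is never
      # written to disk, only held in this process's memory.
      import getpass

      class Keyholder:
          def __init__(self):
              self._password = getpass.getpass('Service password: ')

          def connect(self):
              # hand self._password to the DB driver / TLS layer here
              pass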

    • Are you trolling or missing the point? If it is to be an automated system, you can't have the admin manually put in the password each time it starts. So how do you replace that? Having either the program itself read from a config file or even another program supply it doesn't solve the problem of how/where to securely store the password for these methods to work.

      Eventually you seem to have to trust root and file permissions that the programs and config files can only be accessed by those you trust to do so,
      • To complement the comments made by the first response: there are only two situations where you need an administrator to supply the password. Once when the system is first brought online, and then every time afterwards the system experiences a critical fault or scheduled maintenance that requires services to be restarted. In both cases, there has to be staff available. Especially if a system goes down (which it should not typically do), there is likely a problem that demands attention. Otherwise, under no

  • First thing, storing passwords is a bad idea but sometimes cannot be avoided. There are a few things that can be done. None can really prevent someone from dumping the memory contents because, unless you use more sophisticated client/server validation (based on IP, MAC, host auth, etc.), someone with the right privileges can core dump the system or strace the process. Yes, if someone has access to strace a process you probably have bigger issues, but it's conceivable in a DMZ environment where a particular
  • The only secure way on current hardware for automated authentication is not to embed passwords in source code. If you're willing to use extra hardware, your best bet is a smart card.
  • Don't authenticate the application to the database. Authenticate the user to the database. The user supplies a credential (password, certificate, biometric, etc.), and the application, acting on the user's behalf, forwards the credential to the database. The database consumes the credential, performs authorization, and delivers data to the user, again through the application.

    Kerberos provides a great mechanism for this. Using pkinit, you can use various credential types. Or, stick to the basics and use
  • Alas, there is no really good way to keep things secure in source code. But there are a few good ways to keep things afloat anyway:

    Q. Your coders are not to be trusted
    A. Put a file containing a security token (using the generic term token here; depending on what you use, it may be a certificate or something else). Open the file, read the token, send it to your server
    A2. Use SSL tunneling, using aforementioned certificate, add another file for server details
    A3. Create a "mirror database" with all important information replaced
  • by Midnight Warrior ( 32619 ) on Saturday December 30, 2006 @12:48PM (#17409468) Homepage

    Encrypted file systems have a similar problem. They need to decrypt the filesystem for authorized boots or mounts, but need to stay encrypted otherwise. One common trick here is to only make the decryption key available once, at start up, after which it is kept in memory, preferably with a small amount of obfuscation to slow down memory walkers. You could then use something like FUSE [freshmeat.net] to mount the encrypted filesystem with your plaintext password.
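
    A sketch of that start-up step (Python; the KDF parameters are illustrative): ask for the passphrase once, derive the key, and keep only the in-memory copy.

      # Sketch: derive the filesystem key from a boot-time passphrase
      # and hold it only in memory for the mount helper.
      import getpass, hashlib

      def load_key(salt):
          passphrase = getpass.getpass('Mount passphrase: ')
          return hashlib.scrypt(passphrase.encode(), salt=salt,
                                n=2**14, r=8, p=1, dklen=32)

      # key = load_key(salt_read_from_volume_header)
      # ...hand `key` to the mount, then let the passphrase go out of scope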

    As other folks have wisely pointed out, though, the best posture is to use mandatory access control and restrict access to the configuration file. If you have the privileges, another good practice involves removing all compilers from the machine, firewalling all FTP traffic in or out, firewalling egress (outbound) HTTP traffic (pull in files to process), and restricting SSH traffic to pre-defined nodes, enforcing that with a firewall ruleset. Preferably, you'd make all the firewall stuff occur on a separate box. What this does is restrict what tools will be available to an attacker. You can also remove fun programs like strings, ldd, od, *hexedit, and so on. "But I need to modify these tools!" you say. Leave SVN or CVS clients on the node, check your changes into SVN/CVS on your test bed machine, and then just check out the latest stable branch on your exposed machine. Then you get good protection and good configuration management all in one swoop.

    Other tricks involve establishing a proxy process or strictly limiting what can be done with the compromised username/password. A proxy process might be a setuid C program that only does one thing and accepts no user input. If you must accept user input, be extremely strict (use sscanf on all inputs and limit the size of the buffer accepted) and then have an experienced C developer review your code for improper bounds handling. This proxy process might do things like move files to a read-only directory structure (static web pages in a DMZ), or it might be a CGI script that updates rows in a database. We've actually used the CGI script idea because it a) is a cross-platform way of talking to the database, b) is a good decoupler of otherwise complex code, and c) strongly limits what can be done as an attack. Be careful of the venerable SQL injection attack there, though.
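
    A toy version of that proxy idea (Python with sqlite3 so the sketch is self-contained; a real one might be the setuid C program described above): one fixed action, strictly validated input, and a parameterized query so user input never reaches the SQL text.

      # Toy proxy: the only thing it can do is mark one row processed.
      # Input is validated against a strict pattern, and the query is
      # parameterized, so there is nothing for SQL injection to grab.
      import re, sqlite3

      ROW_ID = re.compile(r'^[0-9]{1,9}$')    # digits only, bounded length

      def mark_processed(row_id_text, db_path='app.db'):
          if not ROW_ID.match(row_id_text):
              raise ValueError('rejected input')
          conn = sqlite3.connect(db_path)
          with conn:
              conn.execute('UPDATE items SET status = ? WHERE id = ?',
                           ('processed', int(row_id_text)))
          conn.close()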

    A good use of a proxy process might be the transparent mounting/unmounting of an external USB drive, perhaps against a hidden partition on the stick. The drive would have your key. Sure it's obfuscation, but it's complicated enough to decode that it will slow somebody down for a while.

    The last trick is to limit what can be accomplished with the username/password that is obtained. We have some processes whose job is to inject data into the database for the backend to all of our tools. That database user is limited to select, insert, and update operations. With Oracle, I could even restrict which specific tables get which privileges.

    The best thing to do is to write a document that some folks call the Security Design Document to define your security posture, what you are known to protect against, and where you are vulnerable. Assign a risk mitigation matrix (vulnerability, threat, countermeasure, residual risk) row to each vulnerability. Be honest and then let your manager understand the position you've left them in and try to assign a cost to each countermeasure/mitigation so they can make a decision on what to close or leave open.

    You are always going to have vulnerabilities. Everyone does, even the best systems. What makes the difference is those who analyze, understand, and counter that risk in a way that is appropriate to the situation. Direct exposure to the Internet is a situation that should warrant better risk analysis, but rarely does.

  • A long time ago, I started using this method on suso.org. It caught on and now I encourage all my customers to do it:

    http://www.suso.org/docs/databases/mysqlinfo.sdf [suso.org]
    http://www.suso.org/docs/databases/saferdbpasswords.sdf [suso.org]

    I've thought about trying to spread the word about it and even making an RFC, but I don't have the time for that.
  • by rlp ( 11898 ) on Saturday December 30, 2006 @03:25PM (#17410852)
    You've got a machine A on the interior LAN, that needs credentials to access a DB on machine B on the interior LAN. You've got two choices:

    1) You can store the credentials somewhere on machine A.
    2) The service (typically a Web server) on machine A can run with an account that either has privileges to access the DB, or has privileges to access credentials stored somewhere else to access the DB.

    If an intruder gets access to machine A and gets root / admin privileges - then they can gain access to the DB. Obviously, your first priority is to make sure that this does not happen! Use good firewalls and firewall rules. Make proper use of a DMZ. Check your application for security problems (buffer overflow, SQL injection, etc). Keep up to date on patches. Your second line of defense is to:

    1) Try to ensure that an intruder is detected.
    2) Make them work for it (access to DB)
    3) Have a good audit trail
    4) Monitor your network and application

    I'll address item #2. Assume that you put the credentials in the configuration file or a separate file on machine A. You should encrypt the credentials (using an encryption application NOT kept on machine A). The key can be hard-coded in the (web) application. If you want, you can use layers of keys (encrypted key B decodes the key in the config file, encrypted key C decodes key B, encrypted key D ...), but this quickly reaches a point of diminishing returns and can become a maintenance nightmare. You can obfuscate the key or even build it on the fly to make it more difficult to extract from the application (for binary apps, it helps to strip the symbol table). Use the OS permissions to restrict access to the config / key file and the (web) application. This won't stop a determined intruder, but they'll have to work for access, and it will slow them down.
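
    To make that concrete, a sketch (Python with the third-party cryptography package; names hypothetical): the config file holds ciphertext and the application holds the key, which forces the intruder to collect both pieces. That is exactly the "make them work for it" property, not real secrecy.

      # Sketch: credentials encrypted in the config file, key held by
      # the app. In real use APP_KEY would be hard-coded/obfuscated in
      # the binary; it is generated here so the sketch runs standalone.
      from cryptography.fernet import Fernet

      APP_KEY = Fernet.generate_key()

      def encrypt_password(password):
          return Fernet(APP_KEY).encrypt(password.encode())

      def decrypt_password(token):
          return Fernet(APP_KEY).decrypt(token).decode()

      token = encrypt_password('z*UIYD!0')    # this goes in the config file
      print(decrypt_password(token))          # what the app recovers at startup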
  • Are you asking about a "default" password that you want to have available immediately at installation? Generate it on the fly during the installation.
    This works well (all one line, of course):

    PASSWORD=`head -c 8 /dev/urandom | tr '\0-\377' 'a-zA-Z0-9a-zA-Z0-9a-zA-Z0-9a-zA-Z0-9@@@@####'`

    Stick it in a configuration file with restricted permissions and mail the location of the file to root so that the admin can change it.
  • I'm working on software with a similar problem--I want to store the SQL database username and password in some halfway-to-secure fashion, because leaving it in cleartext in the PHP is just asking for the database to be compromised. So, the only alternatives are to encrypt it within the code or to put it in an external file. The external file makes it easier to change the username and password after the fact, so that's where I'm going.

    Problem here is that the contents of the file need to be encrypted in so
