Sunday, November 30, 2008
What is Cyrus SASL
Cyrus SASL for System Administrators
This document covers configuring SASL for system administrators, specifically those who are installing a server that uses the Cyrus SASL library.
What SASL is
SASL, the Simple Authentication and Security Layer, is a generic mechanism for protocols to accomplish authentication. Since protocols (such as SMTP or IMAP) use SASL, it is a natural place for code sharing between applications. Some notable applications that use SASL include Sendmail (versions 8.10.0 and higher) and Cyrus imapd (versions 1.6.0 and higher). Applications use the SASL library to handle the SASL protocol exchange and to learn what its results were.
SASL is only a framework: specific SASL mechanisms govern the exact protocol exchange. If there are n protocols and m different ways of authenticating, SASL attempts to make it so only n plus m different specifications need be written instead of n times m different specifications. With the Cyrus SASL library, the mechanisms need only be written once, and they'll work with all servers that use it.
Authentication and authorization identifiers
An important concept to become familiar with is the difference between an "authorization identifier" and an "authentication identifier".
- userid (user id, authorization id)
- The userid is the identifier an application uses to check allowable options. On my Unix system, the user "bovik" (the account of Harry Q. Bovik) is allowed to write to "/home/bovik" and its subdirectories but not to "/etc".
- authid (authentication id)
- The authentication identifier is the identifier that is being checked. "bovik"'s password might be "qqqq", and the system will authenticate anyone who knows "qqqq" as "bovik". However, it's possible to authenticate as one user but act as another user. For instance, Harry might be away on vacation and assign one of his graduate students, Jane, to read his mail. He might then allow Jane to act as him merely by supplying her password and her id as authentication but requesting authorization as "bovik". So Jane might log in with an authentication identifier of "jane" and an authorization id of "bovik" and her own (Jane's) password.
Applications can set their own proxy policies; by default, the SASL library will only allow a user to act as that same user (that is, the userid must equal the authid).
Realms
The Cyrus SASL library supports the concept of "realms". A realm is an abstract set of users, and certain mechanisms authenticate users in a certain realm. In the simplest case, a single server on a single machine, the realm might be the fully-qualified domain name of the server. If the applications don't specify a realm to SASL, most mechanisms will default to this.
If a site wishes to share passwords between multiple machines, it might choose its authentication realm as a domain name, such as "CMU.EDU". On the other hand, in order to prevent the entire site's security from being compromised when one machine is compromised, each server could have its own realm. Certain mechanisms force the user (client side) to manually configure what realm they're in, making it harder for users to authenticate.
A single site might support multiple different realms. This can confuse applications that weren't written in anticipation of this; make sure your application can support it before adding users from different realms into sasldb with saslpasswd.
The Kerberos mechanisms treat the SASL realm as the Kerberos realm. Thus, the realm for Kerberos mechanisms defaults to the default Kerberos realm on the server. They may support cross-realm authentication; check your application on how it deals with this.
Some authentication mechanisms, such as PLAIN and CRAM-MD5, do not support the concept of realms.
How SASL works
How SASL works is governed by what mechanism the client and server choose to use and by the exact implementation of that mechanism. This section describes the way these mechanisms act in the Cyrus SASL implementation.
The PLAIN mechanism and the sasl_checkpass() call
The PLAIN mechanism is not a secure method of authentication by itself. It is intended for connections that are being encrypted by another layer. (For example, the IMAP command "STARTTLS" creates an encrypted connection over which PLAIN might be used.) The PLAIN mechanism works by transmitting a userid, an authentication id, and a password to the server, and the server then determines whether that is an allowable triple. The principal concern for system administrators is how the authentication id and password are verified. The Cyrus SASL library is flexible in this regard:
- passwd
- /etc/passwd is supported innately in the library. Simply set the configuration option "pwcheck_method" to "passwd".
- shadow
- /etc/shadow is somewhat trickier. If the servers that use SASL run as root (such as Sendmail) there's no problem: just set the "pwcheck_method" option to "shadow". However, many daemons do not run as root for additional security, such as Cyrus imapd. In order for these servers to check passwords, they either need a helper program that runs as root, or need special privileges to read /etc/shadow. The easiest way is to give the server the rights to read /etc/shadow by, for instance, adding the cyrus user to the "shadow" group and then setting "pwcheck_method" to "shadow".
It is also possible to write a special PAM module that has the required privileges; default PAM setups do not (to my knowledge) come with this.
- kerberos_v4
- Kerberos v4, if found by the configuration script at compile time, can be enabled for plaintext password checking by setting "pwcheck_method" to "kerberos_v4". This is different from the KERBEROS_V4 mechanism discussed below---this configuration option merely specifies how to check plaintext passwords on the server.
- pam
- PAM, the pluggable authentication module, is the default way of authenticating users on Solaris and Linux. It can be configured to check passwords in many different ways: through Radius, through NIS, through LDAP, or even using the traditional /etc/passwd file. If you wish to use PAM for authentication and the Cyrus SASL library found the PAM library when it was configured at compilation time, PAM is the default (or set "pwcheck_method" to "pam"). SASL calls PAM with the application's service name (for example, Sendmail uses "smtp" and Cyrus imapd uses "imap").
The PAM authentication for SASL only affects the plaintext authentication it does. It has no effect on the other mechanisms, so it is incorrect to try to use PAM to enforce additional restrictions beyond correct password on an application that uses SASL for authentication.
- sasldb
- This stores passwords in the SASL secrets database, the same database that stores the secrets for shared secret methods. Its principal advantage is that it means that the passwords used by the shared secrets mechanisms will be in sync with the plaintext password mechanisms. However, system built-in routines will not use sasldb.
Note that to set plaintext passwords in sasldb, you need to configure "saslpasswd" to do so. The "saslpasswd" utility uses the same configuration files as any SASL server. Make /usr/lib/sasl/saslpasswd.conf contain the line "pwcheck_method: sasldb" to instruct "saslpasswd" to create plaintext secrets in addition to the normal secrets.
- write your own
- Last, but not least, the most flexible method of authentication for PLAIN is to write your own. If you do so, any application that calls the "sasl_checkpass()" routine or uses PLAIN will invoke your code. The easiest approach is to modify the routine "_sasl_checkpass()" in the file lib/server.c to support a new method, and to add that method to lib/checkpw.c. Be sure to add a prototype in lib/saslint.h!
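As an illustration of the "pam" option above, a minimal PAM service file for an SMTP server might look like the following. This is a sketch only; module names and paths vary by system, and your distribution's defaults should be consulted:

```
# /etc/pam.d/smtp -- minimal sketch; pam_unix.so location varies by system
auth     required   pam_unix.so
account  required   pam_unix.so
```

With this in place, a SASL application using the "smtp" service name would check plaintext passwords against the standard Unix password database.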
Shared secrets mechanisms
The Cyrus SASL library also supports some "shared secret" authentication methods: CRAM-MD5 and its successor DIGEST-MD5. These methods rely on the client and the server sharing a "secret", usually a password. The server generates a challenge, and the client sends a response proving that it knows the shared secret. This is much more secure than simply sending the secret over the wire. There's a downside: in order to verify such responses, the server must keep password equivalents in a database; if this database is compromised, it is the same as if every user's password for that realm were compromised.
The Cyrus SASL library stores these secrets in the /etc/sasldb database. Depending on the exact database method used (gdbm, ndbm, or db), the file may have different suffixes or may even be split into two files ("sasldb.dir" and "sasldb.pag"). It is also possible for a server to define its own way of storing authentication secrets, although currently no application is known to do this.
The principal problem for a system administrator is to make sure that sasldb is properly protected; only the servers that need to read it to verify passwords should be able to. If there are any normal shell users on the system, they must not be able to read it.
Managing password changes is outside the scope of the library. However, system administrators should probably provide users with a way to change their passwords. The "saslpasswd" utility is provided to change the secrets in sasldb. It does not affect PAM, /etc/passwd, or any other standard system library; it only affects secrets stored in sasldb.
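For example, an administrator could set or change a user's secret in sasldb with the saslpasswd utility; the realm and user name here are illustrative:

```
saslpasswd -u CMU.EDU bovik
```

The utility prompts for the new password and writes the corresponding secrets into sasldb.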
Finally, system administrators should think if they want to enable "auto_transition". If set, the library will automatically create secrets in sasldb when a user uses PLAIN to successfully authenticate. However, this means that the individual servers, such as imapd, need read/write access to sasldb, not just read access. By default, "auto_transition" is set to false; set it to true to enable. (There's no point in enabling this option if "pwcheck_method" is "sasldb".)
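A minimal SASL configuration enabling the transition, assuming plaintext passwords are checked through PAM, might look like this (the file name depends on the application, as described in the configuration section below):

```
pwcheck_method: pam
auto_transition: true
```

With these options, each successful PLAIN authentication via PAM also writes the shared-secret equivalents into sasldb.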
Kerberos mechanisms
The Cyrus SASL library also comes with two mechanisms that make use of Kerberos: KERBEROS_V4, which should be able to use any Kerberos v4 implementation, and GSSAPI (tested against MIT Kerberos 5 and Heimdal Kerberos 5). These mechanisms make use of the Kerberos infrastructure and thus have no password database. Applications that wish to use a Kerberos mechanism will need access to a service key, stored either in a "srvtab" file (Kerberos 4) or a "keytab" file (Kerberos 5). Currently, the keytab file location is not configurable and defaults to the system default (probably /etc/krb5.keytab).
The Kerberos 4 srvtab file location is configurable; by default it is /etc/srvtab, but this is modifiable by the "srvtab" option. Different SASL applications can use different srvtab files.
A SASL application must be able to read its srvtab or keytab file.
How to set configuration options
The Cyrus SASL library comes with a built-in configuration file reader. However, it is also possible for applications to redefine where the library gets its configuration options from.
The default configuration file
By default, the Cyrus SASL library reads its options from /usr/lib/sasl/App.conf (where "App" is the application-defined name of the application). For instance, Sendmail reads its configuration from "/usr/lib/sasl/Sendmail.conf" and the sample server application included with the library looks in "/usr/lib/sasl/sample.conf".
A standard Cyrus SASL configuration file looks like:
srvtab: /var/app/srvtab
pwcheck_method: kerberos_v4
Application configuration
Applications can redefine how the SASL library looks for configuration information; check your application's documentation for specifics. For instance, Cyrus imapd reads its SASL options from its own configuration file, /etc/imapd.conf, by prepending all SASL options with "sasl_": the SASL option "pwcheck_method" is set by changing "sasl_pwcheck_method" in /etc/imapd.conf.
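As a concrete sketch of the prefix convention just described, making Cyrus imapd check plaintext passwords against sasldb would mean putting a line like this in /etc/imapd.conf:

```
sasl_pwcheck_method: sasldb
```

Any other SASL option documented above can be set the same way, with the "sasl_" prefix added.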
Compiling and installing the library
Unfortunately, the same flexibility that lets administrators upgrade and install new authentication plugins without recompiling any applications can also make SASL a chore to compile. I'll have some sage advice here when I find some.
Tuning the frequency of deferred mail delivery attempts
When a Postfix delivery agent (smtp(8), local(8), etc.) is unable to deliver a message, it may blame the message itself, or it may blame the receiving party.
- When the delivery agent blames the message, the queue manager gives the queue file a time stamp into the future, so it won't be looked at for a while. By default, the amount of time to cool down is the amount of time that has passed since the message arrived. This results in so-called exponential backoff behavior.
- When the delivery agent blames the receiving party (for example a local recipient user, or a remote host), the queue manager not only advances the queue file time stamp, but also puts the receiving party on a "dead" list so that it will be skipped for some amount of time.
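The exponential backoff behavior above can be sketched as follows. This is an illustration with assumed values, not Postfix source code: the cool-down equals the current message age, clamped between minimal_backoff_time and maximal_backoff_time (the defaults listed below).

```shell
# Sketch of the default deferral policy: delay = message age,
# clamped to [minimal_backoff_time, maximal_backoff_time].
min=1000   # minimal_backoff_time in seconds (default)
max=4000   # maximal_backoff_time in seconds (default)
age=1000   # assumed message age at the first deferral
for attempt in 1 2 3 4 5; do
  delay=$age
  if [ "$delay" -lt "$min" ]; then delay=$min; fi
  if [ "$delay" -gt "$max" ]; then delay=$max; fi
  echo "attempt $attempt: deferred for ${delay}s"
  age=$((age + delay))   # the message gets older by the delay
done
```

The delay doubles on each attempt (1000s, 2000s, 4000s) until maximal_backoff_time caps it at 4000s.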
This process is governed by a bunch of little parameters.
- queue_run_delay (default: 1000 seconds)
- How often the queue manager scans the queue for deferred mail.
- minimal_backoff_time (default: 1000 seconds)
- The minimal amount of time a message won't be looked at, and the minimal amount of time to stay away from a "dead" destination.
- maximal_backoff_time (default: 4000 seconds)
- The maximal amount of time a message won't be looked at after a delivery failure.
- maximal_queue_lifetime (default: 5 days)
- How long a message stays in the queue before it is sent back as undeliverable. Specify 0 for mail that should be returned immediately after the first unsuccessful delivery attempt.
- bounce_queue_lifetime (default: 5 days, available with Postfix version 2.1 and later)
- How long a MAILER-DAEMON message stays in the queue before it is considered undeliverable. Specify 0 for mail that should be tried only once.
- qmgr_message_recipient_limit (default: 20000)
- The size of many in-memory queue manager data structures. Among others, this parameter limits the size of the short-term, in-memory list of "dead" destinations. Destinations that don't fit the list are not added.
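Translated into main.cf syntax, the defaults listed above correspond to the following fragment. This is illustrative only, not a tuning recommendation:

```
# /etc/postfix/main.cf -- the documented defaults, spelled out
queue_run_delay              = 1000s
minimal_backoff_time         = 1000s
maximal_backoff_time         = 4000s
maximal_queue_lifetime       = 5d
bounce_queue_lifetime        = 5d
qmgr_message_recipient_limit = 20000
```

Changing any of these takes effect after "postfix reload".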
IMPORTANT: If you increase the frequency of deferred mail delivery attempts, or if you flush the deferred mail queue frequently, then you may find that Postfix mail delivery performance actually becomes worse. The symptoms are as follows:
- The active queue becomes saturated with mail that has delivery problems. New mail enters the active queue only when an old message is deferred. This is a slow process that usually requires timing out one or more SMTP connections.
- All available Postfix delivery agents become occupied trying to connect to unreachable sites etc. New mail has to wait until a delivery agent becomes available. This is a slow process that usually requires timing out one or more SMTP connections.
When mail is being deferred frequently, fixing the problem is always better than increasing the frequency of delivery attempts. However, if you can control only the delivery attempt frequency, consider using a dedicated fallback_relay "graveyard" machine for bad destinations so that they do not ruin the performance of normal mail deliveries.
MySQL replication
This tutorial describes how to set up database replication in MySQL. MySQL replication allows you to have an exact copy of a database from a master server on a second slave server, and all updates to the database on the master server are immediately replicated to the database on the slave server so that both databases are in sync. This is not a backup policy because an accidentally issued DELETE command will also be carried out on the slave, but replication will help protect against loss of data due to hardware failures.
I will be discussing how I set up replication on CentOS with MySQL 5. The actual replication setup is pretty platform-independent, so it should work much the same on any Linux distro; this setup has been tested on CentOS, Fedora, RHEL, Debian and Gentoo Linux. I decided to write this howto because, after reading every other doc I could find and discussing replication concepts with members of the MySQL team, we came to the conclusion that some aspects of the docs on the MySQL website are outdated and need to be updated. Before I go into detail about how I set up replication, let me give some background on why you should not follow the slightly outdated (at the time of this writing) information posted on the MySQL website. (Bug Id: 23615) Odds are good that by the time you read this, I will have submitted more current documentation to MySQL and their instructions will reflect these instructions.
The MySQL site speaks of using binlog-do-db statements on the master. In most cases this does not do what you think it does. It also creates a replication environment where replication processing is performed on the master, and worse, it has many reports of creating an incomplete binlog, which makes point-in-time (PIT) recovery difficult. An ideal replication architecture allows a "lazy" master server, which only builds the binlog and lets the slaves handle the actual replication. The MySQL docs also make use of the replicate-do-db statement; this too is ill-advised in favor of replicate-wild-do-table statements, which allow for more efficient partial replication.
My setup involves one master with two slaves, the configuration of the slaves is identical with the exception of a different server-id. Replication will work with any database storage engine, InnoDB and MyISAM seem to be the favorites. Many like to use InnoDB for the master for added robustness and MyISAM for the slaves for added speed. This is really a matter of personal choice and if you opt for this option, you should consult the documentation on the MySQL site for replication options catering to InnoDB. On to setting up the master.
The first thing needed to prepare the master for replication is to enable networking if it is not already enabled. This is done in my.cnf by making sure that the following lines are commented out or absent.
#skip-networking
#bind-address = 127.0.0.1
Next, you have to designate a unique server-id. By convention the server-id for the master is 1.
server-id = 1
Finally, you have to set the path where the master will create its binary log. It is best practice to write the binlog to a different filesystem from the one holding the tablespace files; this way the tablespace filesystem and the binlog filesystem can fail independently.
log-bin=/some/other/partition/
Another optional statement you should include is expire-logs-days, which sets an expiration age for the binlogs.
expire-logs-days=7
With these settings the binlogs will be created under the configured path, and binlogs older than seven days will be expired automatically. These practices ensure that the binlog on the master will be complete and good for seven days of backlog. After this, you may restart MySQL.
/etc/init.d/mysql restart
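Putting the master settings together, the relevant portion of my.cnf now looks something like the following; the binlog path and base name here are placeholders for whatever you chose above:

```
[mysqld]
server-id        = 1
log-bin          = /some/other/partition/binlog
expire-logs-days = 7
```

Everything else in the file stays as it was.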
Now, we must log into the MySQL server as root to create a user with replication privileges.
mysql -u root -p
When you are presented with the MySQL shell, you must add a replication user account with replication slave privileges. It is best practice to have a dedicated account for replication; here we name this account 'repl'.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'some-password';
The statement above creates an account that any host can use to log into the master. This is most likely not what you want, but you may wish to set it up this way initially to test the connection. After replication is working, you may wish to remove this account and reissue the grant statement as something like:
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.foo.com' IDENTIFIED BY 'some-password';
This restricts slaves to a certain subdomain. After you have successfully created the account, execute the following to flush the privileges, flush and lock the tables, and get some output needed for the slave.
FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
This should produce output similar to:
+---------------+----------+--------------+------------------+
| File          | Position | Binlog_do_db | Binlog_ignore_db |
+---------------+----------+--------------+------------------+
| mysql-bin.009 | 205      |              |                  |
+---------------+----------+--------------+------------------+
1 row in set (0.00 sec)
This information is needed for the slave, so note it. You will notice the two right-hand columns are blank; this shows that all of the replication logic is performed on the slave side, which makes it easy to configure slaves that perform different actions on different databases or tables. Now you can "quit" to leave the MySQL shell, and we will make a dump of the databases or tables that are to be replicated using mysqldump. On the master (which should have the current database to be replicated), create a dump file:
mysqldump --opt --verbose -p databasename > databasename.sql
You may wish to check the man page for mysqldump in case special options apply, but this should work in most cases. It will dump database 'databasename' to a dump file appropriately named 'databasename.sql'. You can scp this over to the slave machine so that we can load it into a newly created database there. That finishes our requirements on the master; you can log back in to the MySQL shell and release the lock.
mysql -u root -p
(enter your password)
UNLOCK TABLES;
quit;
Now, on to the slave. First we need to create the database that we will dump our dump file into. In this case, on the slave we would:
mysql -u root -p
(enter password)
CREATE DATABASE DATABASENAME;
quit;
Now dump your schema file that you copied from the master into your newly created database.
mysql -p databasename < /path/to/databasename.sql
You have to let the replication slave know that it is the replication slave. Specify a unique server-id, along with the following options (with your own values, obviously), in the my.cnf file on the slave server. Remember, if you have more than one slave server, each server-id must be unique.
server-id=2
master-host=master.foo.com
master-port=3307
master-user=repl
master-password=some-password
master-connect-retry=60
In this example we are running MySQL on a non-standard port. If you want to run MySQL on the default port (3306), you can remove this line. You must also add a statement to tell the replicating slave which databases or tables to replicate. Previously this was done with the replicate-do-db statement; however, we will use the newer replicate-wild-do-table instead. To replicate the entire database 'databasename', add the following entry to my.cnf on the slave.
replicate-wild-do-table=databasename.%
If you wish to specify a specific table, you can replace the wild (%) character with the table name. You should also create a directory for the relay logs, and make sure that the MySQL process has permission to write to it. I created a directory called mysql-replication in /var/log (mkdir /var/log/mysql-replication) and set the ownership accordingly (chown -R mysql:mysql /var/log/mysql-replication). Then, inside of your my.cnf file, add the following entries:
log-error = /var/log/mysqld.log
relay-log = /var/log/mysql-replication/
relay-log-info-file = /var/log/mysql-replication/rel
relay-log-index = /var/log/mysql-replication/
The first statement you may already have; this is just the standard error log. Check to make sure that you have it in the my.cnf file somewhere, and add the remaining statements to define the relay logs.
Then, restart MySQL.
/etc/init.d/mysql restart
If you are watching your logs, you will notice that the connection fails. This is because we still have not set the master host value. Log into the MySQL shell on the slave and stop the slave replication IO thread.
mysql -u root -p
(enter password)
SLAVE STOP;
This will kill the replication thread and allow us to set the host. While inside the MySQL shell, pass the following command to the database server. Remember to substitute the values with those that you noted earlier.
CHANGE MASTER TO MASTER_HOST='master.foo.com', MASTER_USER='repl', MASTER_PASSWORD='some-password', MASTER_LOG_FILE='mysql-bin.009', MASTER_LOG_POS=205, MASTER_PORT=3307;
MASTER_HOST is the IP address or hostname of the master (in this example it is master.foo.com).
MASTER_USER is the user we granted replication privileges on the master.
MASTER_PASSWORD is the password of MASTER_USER on the master.
MASTER_LOG_FILE is the file MySQL gave back when you ran SHOW MASTER STATUS; on the master.
MASTER_LOG_POS is the position MySQL gave back when you ran SHOW MASTER STATUS; on the master.
MASTER_PORT is the port number that MySQL is running on. (optional and defaults to 3306).
Now, all that you have to do to start replication is to start the slave. Log back into the MySQL shell and issue the following command.
START SLAVE;
You will see in the logs that the slave syncs to the point in the binlog that you noted, and all future updates to the master will be replicated to the slave. Cool, huh?
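To confirm that replication is actually running, you can also check the slave's status from the MySQL shell and verify that both Slave_IO_Running and Slave_SQL_Running report "Yes":

```
SHOW SLAVE STATUS\G
```

The output also shows the current relay log position and, in Seconds_Behind_Master, how far the slave lags behind.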
=============================
Saturday, November 29, 2008
My First post
My name is Ashwin, and I work as a Linux administrator. My motive for creating this blog is to discuss with and help people working on all *nix platforms.
So guys, let's get together and build a vast knowledgebase.
Ashwin