@xtccc, 2015-09-22

Installing Kerberos




References:
+ Using Kerberos
+ Kerberos Infrastructure HOWTO
+ How to Install Kerberos 5 KDC Server on Linux for Authentication

Environment Setup

| Hostname    | Internal IP   | Role            |
|-------------|---------------|-----------------|
| hadoop1.com | 59.xxx.xxx.3  | Master KDC      |
| hadoop2.com | 59.xxx.xxx.72 | Kerberos client |
| hadoop3.com | 59.xxx.xxx.73 | Kerberos client |
| hadoop4.com | 59.xxx.xxx.75 | Kerberos client |
| hadoop5.com | 59.xxx.xxx.76 | Kerberos client |
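
Before going further, it may help to confirm that the hostnames resolve and that the clocks of all nodes are in sync (both are prerequisites for Kerberos). A minimal sketch, assuming root SSH access from hadoop1.com to the other nodes:

    # Quick sanity check, run on hadoop1.com:
    for h in hadoop1.com hadoop2.com hadoop3.com hadoop4.com hadoop5.com; do
        getent hosts "$h"        # does the hostname resolve?
        ssh root@"$h" date       # eyeball the clock skew between nodes
    done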



Installing Kerberos Package

  yum install krb5-server krb5-libs krb5-auth-dialog        # server packages, for the KDC host
  yum install krb5-workstation krb5-libs krb5-auth-dialog   # client packages, for the other hosts
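
To double-check which packages ended up where, a quick query (the first line is meant for the KDC host, the second for the client hosts):

    rpm -q krb5-server krb5-libs krb5-auth-dialog
    rpm -q krb5-workstation krb5-libs krb5-auth-dialog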



Configuring a Kerberos Server

When configuring Kerberos, set up the master KDC first, and only then install any secondary KDC servers.

Configuring the Master KDC Server

  1. Make sure that clocks are synchronized across all clients and servers, and that DNS resolves correctly.

  2. Choose a host to run the KDC and install krb5-libs, krb5-server and krb5-workstation on it.

    [root@hadoop1 ~]# yum install krb5-libs krb5-server krb5-workstation

    The KDC host itself must be kept highly secure; typically it runs nothing but the KDC.
    In this article we choose hadoop1.com as the host that runs the KDC.

    After installing the packages above, the configuration files /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf are created on the KDC host; they reflect the realm name and the domain-to-realm mappings, respectively.

  3. Configure krb5.conf and kdc.conf

    We only need to tweak these two template files slightly. For details of every option, consult the man pages (man krb5.conf and man kdc.conf), or refer to the Kerberos configuration documentation.

    Configuration of /etc/krb5.conf:

    [logging]
     default = FILE:/var/log/krb5libs.log
     kdc = FILE:/var/log/krb5kdc.log
     admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
     default_realm = GUIZHOU.COM
     dns_lookup_realm = false
     dns_lookup_kdc = false
     ticket_lifetime = 24h
     renew_lifetime = 7d
     forwardable = true

    [realms]
     GUIZHOU.COM = {
      kdc = hadoop1.com
      admin_server = hadoop1.com
     }

    [domain_realm]
     hadoop1.com = GUIZHOU.COM
     hadoop2.com = GUIZHOU.COM
     hadoop3.com = GUIZHOU.COM
     hadoop4.com = GUIZHOU.COM
     hadoop5.com = GUIZHOU.COM

    Configuration of /var/kerberos/krb5kdc/kdc.conf:

    [kdcdefaults]
     kdc_ports = 88
     kdc_tcp_ports = 88

    [realms]
     GUIZHOU.COM = {
      #master_key_type = aes256-cts
      acl_file = /var/kerberos/krb5kdc/kadm5.acl
      dict_file = /usr/share/dict/words
      admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
      supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
     }


  4. Create/initialize the Kerberos database

    [root@hadoop1 ~]# /usr/sbin/kdb5_util create -s

    Here, '-s' creates a stash file in which the master server key is stored (it is read by krb5kdc at startup); '-r' can be used to specify a realm name, which is only necessary when krb5.conf defines more than one realm.

    During this step you are prompted for the database master password. Be sure to remember it; if you forget it, you will no longer be able to administer the Kerberos server. The password we set is 'KDC-DB-1234'.

    Once the Kerberos database has been created, several new files appear under /var/kerberos/krb5kdc:

    kadm5.acl
    kdc.conf
    principal
    principal.kadm5
    principal.kadm5.lock
    principal.ok


  5. Add a database administrator

    We need to add administrative principals to the Kerberos database (principals that are allowed to administer it); at least one is required so that the Kerberos administration daemon kadmind can communicate over the network with the kadmin program.

    Run the following on the master KDC:

    [root@hadoop1 ~]# /usr/sbin/kadmin.local -q "addprinc admin/admin"

    Here we set its password to 'db-admin-1234'.

    kadmin.local can be run directly on the master KDC without first authenticating through Kerberos; in fact it only needs read/write access to the local database files.

    The kadmin utility communicates with the kadmind server over the network, and uses Kerberos to handle authentication. For this reason, the first principal must already exist before connecting to the server over the network to administer it. Create the first principal with the kadmin.local command, which is specifically designed to be used on the same host as the KDC and does not use Kerberos for authentication.


  6. Set ACL permissions for the database administrator

    On the KDC, we edit the ACL file to set permissions; its default path is /var/kerberos/krb5kdc/kadm5.acl (this can be changed in kdc.conf). Kerberos's kadmind daemon uses this file to manage access to the Kerberos database; for operations that affect principals, the ACL file also controls which principals may operate on which other principals.

    Now grant the administrator its permissions by editing /var/kerberos/krb5kdc/kadm5.acl so that it contains:

    */admin@GUIZHOU.COM *

    This means: any principal in the GUIZHOU.COM realm with an admin instance has all administrative privileges.


  7. Start the Kerberos daemons on the master KDC

    The daemons that must run on the KDC server are krb5kdc and kadmin; they can be configured to start automatically:

    [root@hadoop1 ~]# /sbin/chkconfig krb5kdc on
    [root@hadoop1 ~]# /sbin/chkconfig kadmin on

    They can also be started manually:

    [root@hadoop1 ~]# /etc/rc.d/init.d/krb5kdc start
    [root@hadoop1 ~]# /etc/rc.d/init.d/kadmin start

    OK, the KDC is now up and running. The two daemons run in the background; their log files are /var/log/krb5kdc.log and /var/log/kadmind.log.

    The kinit command can be used to check whether the two daemons are working properly.

    Verify that the KDC is issuing tickets. First, run kinit to obtain a ticket and store it in a credential cache file. Next, use klist to view the list of credentials in the cache and use kdestroy to destroy the cache and the credentials it contains.

    By default, kinit attempts to authenticate using the login user name of the account used when logging into the system (not the Kerberos server). If that user name does not correspond to a principal in the Kerberos database, kinit issues an error message. If that happens, supply kinit with the name of the correct principal as an argument on the command line (kinit <principal>).

    Once kadmind is started on the server, any user can access its services by running kadmin on any of the clients or servers in the realm. However, only users listed in the kadm5.acl file can modify the database in any way, except for changing their own passwords.

    [root@hadoop1 ~]# kinit admin/admin@GUIZHOU.COM
    Password for admin/admin@GUIZHOU.COM:
    [root@hadoop1 ~]# klist
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: admin/admin@GUIZHOU.COM
    Valid starting     Expires            Service principal
    09/18/15 10:14:33  09/19/15 10:14:33  krbtgt/GUIZHOU.COM@GUIZHOU.COM
            renew until 09/18/15 10:14:33
    [root@hadoop1 ~]# kdestroy
    [root@hadoop1 ~]# klist
    klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)
    Note: the commands above (kinit, klist and kdestroy) only become available after the Kerberos client packages (i.e. krb5-workstation) are installed. A condensed recap of steps 4-7 is sketched right after this list.
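
For reference, steps 4-7 can be condensed into the following sketch, run as root on hadoop1.com. It reuses the passwords chosen in this article; passing a password on the command line (addprinc -pw) is only acceptable in a test setup.

    /usr/sbin/kdb5_util create -s                   # prompts for the master password ('KDC-DB-1234' here)
    /usr/sbin/kadmin.local -q "addprinc -pw db-admin-1234 admin/admin"   # admin principal, non-interactive
    echo '*/admin@GUIZHOU.COM *' > /var/kerberos/krb5kdc/kadm5.acl       # grant all privileges to */admin
    /sbin/chkconfig krb5kdc on
    /sbin/chkconfig kadmin on
    /etc/rc.d/init.d/krb5kdc start
    /etc/rc.d/init.d/kadmin start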



Principal Creation

Create a user principal:

  [root@hadoop1 ~]# kadmin.local
  Authenticating as principal root/admin@GUIZHOU.COM with password.
  kadmin.local: addprinc xiaotao
  WARNING: no policy specified for xiaotao@GUIZHOU.COM; defaulting to no policy
  Enter password for principal "xiaotao@GUIZHOU.COM":
  Re-enter password for principal "xiaotao@GUIZHOU.COM":
  Principal "xiaotao@GUIZHOU.COM" created.

This creates a user principal named 'xiaotao', whose password we set to 'xiaotao-1234'.

The listprincs command shows the principals that currently exist:

  kadmin.local: listprincs
  K/M@GUIZHOU.COM
  admin/admin@GUIZHOU.COM
  kadmin/admin@GUIZHOU.COM
  kadmin/changepw@GUIZHOU.COM
  kadmin/hadoop1.com@GUIZHOU.COM
  krbtgt/GUIZHOU.COM@GUIZHOU.COM
  xiaotao@GUIZHOU.COM
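
The same can be done without entering the interactive kadmin shell by passing queries with -q, which is convenient in scripts. A sketch (mind your shell history when using -pw):

    /usr/sbin/kadmin.local -q "addprinc -pw xiaotao-1234 xiaotao"
    /usr/sbin/kadmin.local -q "listprincs"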




Client Configuration

Once the Kerberos client package (krb5-workstation) is installed, a host can initiate Kerberos authentication against the KDC.

We install the Kerberos client on another host, hadoop2.com.

  [root@hadoop2 ~]# yum install krb5-workstation

Once the client is installed, configure /etc/krb5.conf on that host; its content can simply be kept identical to the file on the KDC.
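
One possible way to distribute the file, assuming root SSH access from the KDC to the clients:

    # Run on hadoop1.com:
    for h in hadoop2.com hadoop3.com hadoop4.com hadoop5.com; do
        scp /etc/krb5.conf root@"$h":/etc/krb5.conf
    done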

Now, on hadoop2.com, we try to authenticate against the KDC as the principal created earlier (xiaotao@GUIZHOU.COM), hoping to obtain a TGT issued by the KDC.

  [root@hadoop2 ~]# kinit xiaotao@GUIZHOU.COM
  Password for xiaotao@GUIZHOU.COM:
  [root@hadoop2 ~]# klist
  Ticket cache: FILE:/tmp/krb5cc_0
  Default principal: xiaotao@GUIZHOU.COM
  Valid starting     Expires            Service principal
  09/18/15 10:30:42  09/19/15 10:30:42  krbtgt/GUIZHOU.COM@GUIZHOU.COM
          renew until 09/18/15 10:30:42

It worked!

klist will tell you under which principal you are currently authenticated to Kerberos, and if applicable, which and when you asked for a specific TGS.
Since we did not set up any service to use kerberos yet, you should not see any entry, except the TGT.



Let's also try with a principal that does not exist (say 'xt'):

  [root@hadoop2 ~]# kinit xt
  kinit: Client not found in Kerberos database while getting initial credentials

As expected, it fails.




Common Issues

1. Check whether a ticket is renewable

Check with the klist command:

[hdfs@hadoop2 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_496
Default principal: hdfs@GUIZHOU.COM

Valid starting              Expires                       Service principal
09/18/15 22:56:28     09/19/15 22:56:28    krbtgt/GUIZHOU.COM@GUIZHOU.COM
                  renew until 09/18/15 22:56:28

If the "Valid starting" value is the same as the "renew until" value, the ticket for that principal is not renewable.
The ticket of the hdfs principal above is therefore not renewable.
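
To see where the limitation comes from, you can inspect the maximum renewable life of both the client principal and the TGT principal on the KDC (a sketch):

    kadmin.local -q "getprinc hdfs"
    kadmin.local -q "getprinc krbtgt/GUIZHOU.COM@GUIZHOU.COM"
    # Look at the "Maximum renewable life" line: a value of 0 means tickets can never be renewed.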


2. Ticket cannot be renewed

[hdfs@hadoop2 ~]$ kinit -R
kinit: Ticket expired while renewing credentials

This is because the maximum renewable life ('maxrenewlife') of krbtgt/GUIZHOU.COM@GUIZHOU.COM has been set to 0, which can be seen via 'kadmin.local => getprinc krbtgt/GUIZHOU.COM@GUIZHOU.COM'.


The fix is to change the maxrenewlife of krbtgt/GUIZHOU.COM@GUIZHOU.COM to 7 days:

kadmin.local: modprinc -maxrenewlife 1week krbtgt/GUIZHOU.COM@GUIZHOU.COM



Now klist shows that the principal's ticket is renewable:

[hdfs@hadoop1 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1100
Default principal: hdfs@GUIZHOU.COM

Valid starting              Expires                         Service principal
09/21/15 10:52:40     09/22/15 10:52:40      krbtgt/GUIZHOU.COM@GUIZHOU.COM
                 renew until 09/28/15 10:52:34

Reference: Re: Strange problem with ticket renewal
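
Note that, in addition to the krbtgt principal, the client principal itself typically also needs a non-zero maxrenewlife, and the ticket must be requested as renewable in the first place (via kinit -r or a renew_lifetime setting in krb5.conf). A sketch of the renewal cycle:

    kinit -r 7d hdfs@GUIZHOU.COM   # request a TGT renewable for up to 7 days
    klist                          # "renew until" should now be later than "Valid starting"
    kinit -R                       # renew the existing TGT before it expires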




Kerberized Services

Host service
