On Thu, 17 Nov 2005 09:26:14 -0500, James Pifer wrote:

> Hi. I have a server in our DMZ and I'm exporting a specific directory
> with NFS. I have an internal server that I want to mount it on. The
> internal server is allowed through the firewall without restriction.
> Firewall guy tells me it's wide open for this internal server, TCP and
> UDP.
>
> When I try to mount the drive I get this error:
> pmap_getmaps rpc problem: RPC: Unable to receive; errno = Connection
> reset by peer
>
> On the server running NFS I get this:
> rpc.mountd: authenticated mount request from [internal_server]:680
> for /usr/test (/usr/test)
>
> If I do an nmap from the internal server to the external server
> running NFS I get:
>
> (The 1648 ports scanned but not shown below are in state: closed)
> PORT      STATE SERVICE
> 22/tcp    open  ssh
> 80/tcp    open  http
> 111/tcp   open  rpcbind
> 443/tcp   open  https
> 933/tcp   open  unknown
> 5001/tcp  open  commplex-link
> 5801/tcp  open  vnc-http-1
> 5901/tcp  open  vnc-1
> 10000/tcp open  snet-sensor-mgmt
>
> A UDP port scan seems to hang.
>
> If I do an rpcinfo on the external server running NFS I get:
> # rpcinfo -p 127.0.0.1
>    program vers proto   port
>     100000    2   tcp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  32768  status
>     100024    1   tcp  32768  status
>     391002    2   tcp  32769  sgi_fam
>     100011    1   udp    930  rquotad
>     100011    2   udp    930  rquotad
>     100011    1   tcp    933  rquotad
>     100011    2   tcp    933  rquotad
>     100003    2   udp   2049  nfs
>     100003    3   udp   2049  nfs
>     100021    1   udp  32781  nlockmgr
>     100021    3   udp  32781  nlockmgr
>     100021    4   udp  32781  nlockmgr
>     100005    1   udp  32782  mountd
>     100005    1   tcp  59483  mountd
>     100005    2   udp  32782  mountd
>     100005    2   tcp  59483  mountd
>     100005    3   udp  32782  mountd
>     100005    3   tcp  59483  mountd
>
> Any thoughts on what the problem is?
>
> Thanks,
> James

Besides the firewall, other things to check are tcp wrappers
(/etc/hosts.allow and /etc/hosts.deny - I once pulled hair over this
one) and the permissions of the partitions exported by the NFS server.

On the client, do a

  /usr/sbin/showmount -e nfs.server.com

Whenever you modify something on the nfs server, run exportfs -r or
restart the nfs server (better, because it restarts the rpc services
too).

Also, you're not root on the client when you're trying to access the
exports, are you? By default, the nfs server does not treat a remote
root user as its own root user, for obvious reasons. So if you're root
on the client and try to access an exported partition that belongs to,
say, joe/users, you'll get an error. Also, the nfs server need not give
a client unrestricted access to its exports.
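For instance, you can restrict an export to one client and keep root
squashing in /etc/exports, and let the same client through tcp
wrappers. The host name below is just a placeholder, and the
tcp-wrappers daemon names can vary by distribution:

  # /etc/exports - export /usr/test read-write to one client only;
  # root_squash (the default) maps remote root to nobody
  /usr/test  internal.example.com(rw,sync,root_squash)

  # /etc/hosts.allow - tcp wrappers must also admit the client
  portmap: internal.example.com
  mountd:  internal.example.com

Run exportfs -r after editing /etc/exports, as above.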
The problem with nfs and a firewall is that the rpc services run on
random ports, so the firewall would have to open the same (random)
ports to allow access to nfs. Fortunately, nfsd can be configured so
that the rpc services run on fixed ports, like so, on the nfs server:

  # cat /etc/sysconfig/nfs
  STATD_PORT=4000
  LOCKD_TCPPORT=4001
  LOCKD_UDPPORT=4001
  MOUNTD_PORT=4002
  RQUOTAD_PORT=4003

Still on the nfs server, put these rules in /etc/sysconfig/iptables:

  -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
  -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 2049 -j ACCEPT
  -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
  -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
  -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 4000:4003 -j ACCEPT
  -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 4000:4003 -j ACCEPT

These rules allow anything to access the nfs/rpc ports. To allow only a
single machine, add its address to these rules.
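For example, assuming the internal server is 10.0.0.5 (a placeholder
address), the first rule would become:

  # only the internal server may reach nfs over tcp
  -A RH-Firewall-1-INPUT -s 10.0.0.5 -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT

and likewise for the other five rules.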
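After changing /etc/sysconfig/nfs, restart the nfs services and check
that the daemons really bound to the fixed ports (the service names
here are the usual Red Hat ones and may differ elsewhere):

  # service nfs restart; service nfslock restart
  # rpcinfo -p 127.0.0.1 | egrep 'mountd|status|nlockmgr|rquotad'

mountd should now show port 4002 on both tcp and udp, and so on. From
the client, rpcinfo -p nfs.server.com should list the same ports - if
it fails with the pmap_getmaps error again, the firewall is still in
the way.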