We have Solaris and Linux machines on one network, and we often run into problems with the Network File System (NFS). We have been unable to get the Linux box to perform well as a file server.
Can you recommend some tips for improving NFS performance?
The rsize and wsize mount options specify the size of the data chunks that the client and server pass back and forth. If rsize and wsize are not specified, the defaults vary with the NFS version you are using. Check your defaults; they may be too large or too small for your workload. Other things to look at are your network packet size and the number of nfsd daemons currently running on the server.
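As an illustration, explicit rsize/wsize values can be set at mount time or in /etc/fstab. The server name, export path, and the 32 KB sizes below are hypothetical examples, not recommendations for your workload; benchmark a few values and keep what performs best:

```shell
# Mount an NFS export with explicit 32 KB read/write chunk sizes.
# "nfsserver" and the paths are example names, not real hosts.
mount -t nfs -o rsize=32768,wsize=32768 nfsserver:/export/home /mnt/home

# Equivalent /etc/fstab entry:
# nfsserver:/export/home  /mnt/home  nfs  rsize=32768,wsize=32768  0 0

# On the server, count the nfsd daemons currently running:
ps -ef | grep '[n]fsd' | wc -l
```

If the nfsd count is low and the server handles many clients, raising it (for example via the startup script that launches rpc.nfsd) is a common first tweak.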
It is important to understand that NFS is a client-server system, so both the client and the server need to be analyzed, and possibly tweaked, in an effort to improve performance. Before even getting to the areas that could be tweaked, I would recommend taking a bottom-up approach (working up the OSI model) to the problem.
Let's look at the physical layer first, as nothing will make performance as bad as an improperly configured switch or network device. Are you using dumb hubs instead of switches? If so, shame on you; collisions may be killing you. If you're using a fully configurable switch, make sure it is set for a fixed rate (not auto-negotiate) of at least 100 Mbps, assuming both your NIC and switch support this. You will also need to make sure that the driver settings on your operating system reflect this rate.
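On Linux, a sketch of checking and pinning the NIC rate might look like the following. The interface name eth0 is an assumption, and your NIC driver must support ethtool; the fixed setting should match what is configured on the switch port:

```shell
# Show the current speed, duplex, and auto-negotiation state
# (eth0 is an example interface name).
ethtool eth0

# Pin the NIC to 100 Mbps full duplex with auto-negotiation off,
# mirroring a fixed-rate switch port.
ethtool -s eth0 speed 100 duplex full autoneg off
```

A duplex mismatch (one side fixed, the other auto-negotiating) is a classic cause of the collision and error counts described above, so set both ends the same way.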
Assuming the output of netstat (and whatever else you use) does not indicate any problems of this sort, you are ready to move on and attack some of the higher-level (on the OSI model) application issues. Here's a nice link that I've seen floating around in several places that can help put you on the right track:
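As a sketch of that netstat sanity check, a couple of commands worth running on both client and server before moving up the stack (exact output fields vary by OS):

```shell
# Look for per-interface errors and collisions.
netstat -i

# On Linux, nfsstat shows client-side RPC statistics; a high retrans
# count relative to total calls suggests network or server trouble.
nfsstat -rc
```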
This was first published in August 2003