Installing Squid proxy server in Ubuntu

Squid is an HTTP proxy server that speeds up fetching pages from the internet by keeping copies of commonly accessed pages and graphics instead of downloading them each time. To install it:

1. From a root terminal type apt-get install squid

2. Open the configuration file for editing with gedit /etc/squid/squid.conf

3. Find the TAG: visible_hostname and after the comments section add visible_hostname <hostname> where <hostname> is your machine’s hostname.

4. Check that http_port is either set to 3128 or to a port number you can remember for configuring your browser.

5. Save and close the file

6. Type adduser squid and specify a password

7. Restart squid by typing: /etc/init.d/squid restart

8. Stop the service by typing /etc/init.d/squid stop

9. Initialise the cache by typing squid -z (this creates the cache directories)

10. Type squid -NCd10 to test squid in debug mode and leave it running.

11. Open Firefox and type the URL localhost:3128 or whatever port you chose. It will fail to retrieve a page, but at the bottom it will confirm that the error is generated by squid.

12. Back at the Terminal type CTRL-C to cancel the debug mode

13. Start squid for real with /etc/init.d/squid start. It will start automatically from now on.

14. To configure Firefox to use squid, go to Edit>Preferences and click Advanced.

15. Click Network>Settings and then Manual Proxy Configuration. For HTTP Proxy, enter localhost, and for Port, 3128 (or whichever port you chose).

16. Then click OK and close the Preferences dialogue.

17. Now go to any webpage. If you get the page, it’s working!
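For reference, the two squid.conf settings from steps 3 and 4 end up looking something like this (the hostname logan is just an example – use your own machine's name):

```
# /etc/squid/squid.conf (relevant lines only)
# visible_hostname: your machine's hostname
visible_hostname logan
# http_port: the port to configure in the browser
http_port 3128
```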

Powered by ScribeFire.


Partitioning, Mount Points and other Gems in Ubuntu 7.10

Power Cuts

Two days ago we had several power cuts that completely managed to scrag the hard drives in logan and cerebro (the fileserver). Ho hum… Time for a re-install, I guess. Good job the data on the file server was on a separate hard drive. Having done some research since the first install, now might be the time to add some resilience to the systems by using several partitions to protect the data. The idea is that if the system goes down I can work on that, while configuration and user data remain safe.

Partitions? Why bother?

When Ubuntu installs, it sets itself up in one large partition and up to this point I have used the Guided – Use Entire Disk option. So why use separate partitions for some of the installation? Well, I figure there are several advantages. It goes like this:

Partition Use

/ The root (/) partition stores the core system files and apart from some small additions and re-compiles will remain relatively fixed. Being separate from everything else should give it extra security.

/usr This directory holds user tools, compilers and other stuff. This will surely grow as I add stuff and being separate will allow easier and more secure re-installs.

/var This directory holds the log files, spool files and other stuff that changes a lot. Giving it a partition all to itself means that a runaway process generating loads of data will fill this small partition up rather than taking over the whole system. There is a type of system attack that generates millions of log entries with the aim of exhausting free space, so a separate storage space for these files seems a really good idea.

/tmp Temporary files could also possibly grow beyond belief, so that same logic applies here.

/home Placing the home directories on their own partition prevents users from filling up the hard drive and enforces a primitive form of quota management. This will have to do until I can figure out how to get home directories on the server.

The Plan

Logan has a 250 Gb hard drive, which gives 236 Gb to Linux. During the installation process, I chose Manual rather than either of the Guided partitioning options. The first step is to delete the suggested partitions before setting up my own plan.

/ 25 Gb
swap 3 Gb
/usr 50 Gb
/home 50 Gb
/var 50 Gb
/tmp 72 Gb

On reflection I might change the home directory to 72Gb and reduce /tmp to 50Gb.

Mount Points

As you set the size of a partition (25000 for 25 Gb, for example) the dialog asks for a mount point and doesn’t offer me any choices. A Mount Point is a directory in the file system where the new partition is going to live, so all I’ve got to do here is type in the directory names listed above for each partition.

With all the partitions set up, I click the Go button and the rest of the system installs flawlessly. Brilliant!
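The finished scheme ends up in /etc/fstab looking roughly like this – a sketch only, assuming the drive is /dev/sda with one primary root partition and the rest as logicals inside an extended partition (your device names will almost certainly differ):

```
# /etc/fstab (sketch - device names are assumptions)
/dev/sda1   /       ext3   defaults,errors=remount-ro   0   1
/dev/sda5   none    swap   sw                           0   0
/dev/sda6   /usr    ext3   defaults                     0   2
/dev/sda7   /home   ext3   defaults                     0   2
/dev/sda8   /var    ext3   defaults                     0   2
/dev/sda9   /tmp    ext3   defaults                     0   2
```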


Setting up NFS Server in Ubuntu

The idea of an NFS server appeals to me: not only will it allow native Linux shares (no reliance on Samba) but it will also let me centralise home directories and hopefully create true roaming profiles. Essential tools in a server's armoury. I am working from tutorials found at ubuntuguide here and ubuntugeek here.

NFS Server

The first thing I need to do is set up the NFS server module on the server machine.

1. Type sudo apt-get install nfs-kernel-server

2. Type sudo dpkg-reconfigure portmap and it asks whether I want to bind to loopback and I select no. It then informs me I need to restart the service.

3. Restart using sudo /etc/init.d/portmap restart and it restarts beautifully.

Configure the NFS Server

1. As I don't have a GUI on the server, I'm going to try to work from Webmin.

2. I click on Networking>NFS Exports

3. I have no idea which NFS version I have installed so I'll leave the version 4 box ticked. Under Directory to export, I type /gary to try and make a network-wide home directory for myself. I assume I'll have to make this directory manually, but I live in hope, so I'll continue with this export definition and see what happens.

4. I notice that it is Active and exported to Everyone – not sure about everything else in this category, so I'll leave it alone.

5. Under Export security, I tick No for Read-only (I want to be able to write to my home directory).

6. I don’t understand anything else so I’ll leave it all alone and click Create at the bottom of the screen.

7. As I guessed, it says the directory does not exist. Ho hum. OK. I'll go to File Manager and create it before trying again.

8. I create a new directory /export and then another in there, /gary. While I’m here I also create directories for my other users.

9. Back to NFS Exports to repeat steps 3-6. Mmmm – time to try something else: I delete the /export in NFSv4 Pseudofilesystem and add /export/gary to Directory to export, check No for Read-only and then click Create. This seems to work – the directory has gone green in File Manager.

Installing the NFS Client

1. Back on my desktop, I open a terminal and type sudo apt-get install portmap nfs-common

2. Now to try and mount the share in my home directory. I type sudo mount /home/gary and it tells me permission denied. Drat!

3. I change the permissions of the folder to 0777 (write for all) and the group to users, and it still does not work.
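Part of the trouble here may be that mount on its own needs a source as well as a mount point, unless an /etc/fstab entry supplies it. A sketch of both forms, using a made-up server address of 192.168.1.10:

```
# one-off mount from the client (server IP is a made-up example):
#   sudo mount 192.168.1.10:/export/gary /home/gary

# or an /etc/fstab entry on the client, so 'sudo mount /home/gary' works:
192.168.1.10:/export/gary   /home/gary   nfs   rw,hard,intr   0   0
```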

Another Share method

From here, I’m going to try:

1. Go to File Manager in Webmin

2. Edit the file /etc/exports and add a line for /export/files with the options (rw,no_root_squash,async)

3. This shows up in Webmin's NFS Exports with Network as the Exported to.. entry.

4. Just going to create that directory first and then see if I can edit the /home/gary entry to reflect the network line.

5. Oh, and test it:)

6. Right, under the files export tab, the Export to… field is set to IPv4 = and Netmask = 254. So I'll change the home share to the same values. That wasn't accepted – it said 254 isn't a valid netmask (which it's not), but that is what it shows for the files export… huh? I'll also turn off the Clients must be on secure port option.

7. Let's go and have a look at /etc/exports again. I'll manually change the line to match the 'files' line, i.e. use (rw,no_root_squash,async) instead of (rw)

Finally! A Method that works!

I deleted all my previous attempts at adding Export shares under Webmin and went to create a new one with the following settings:-

1. NFS Version = 3
2. NFSv4 Pseudofilesystem.. = blank
3. Directory to export = /export/files
4. Active = Yes
5. Export to = IPv4 Network, IP =, Netmask =
6. Security level = None
7. Read-only = No
8. Clients on secure port = No
9. Disable subtree checking = No
10. Hide the filesystem = No
11. Immediately sync all writes = Yes
12. Trust remote users = Everyone

All other settings are default. Now when I sudo mount /home/gary/files – it works!

I have just found out that NFS doesn’t validate users, just the hostnames or IPs of the workstations connecting to the server, so quite how I’m going to use it to centralise home directories, I don’t know yet.
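For the record, the working Webmin settings above should correspond to an /etc/exports line something like the one below. This is a sketch: the 192.168.1.0/255.255.255.0 network is a made-up example (substitute your own LAN values, which Webmin filled in above), and the option names are my reading of the Webmin fields – rw for Read-only = No, sync for Immediately sync all writes = Yes, insecure for Clients on secure port = No, no_root_squash for Trust remote users = Everyone:

```
# /etc/exports - sketch of the working share
/export/files   192.168.1.0/255.255.255.0(rw,sync,insecure,no_root_squash)
```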

FTP onto the Ubuntu Gutsy Server

Getting an FTP client

Going to the Synaptic Package Manager on my Ubuntu desktop and typing ftp into the search bar, I find a popular little client called gFTP. I mark it for installation and click Apply. It downloads and installs successfully.

Flushed with optimism after the recent success with Apache, I launch the client and try to connect to the server, but the connection gets refused. Mmmm – either there is no FTP server installed on the server or I have not set up an FTP user.


I just noticed that in Others>Upload and Download there is a facility to upload files directly to the server. That's pretty amazing. However, it would be nice to have an FTP server running so that I can give other members of the family their own accounts and let them build web pages. So I go to Servers>ProFTPD Server, which looks like it might do the job, and it tells me that it cannot find it but offers me the chance to download and install using APT. Now that sounds good. So I follow the click here link.


Yesterday, I navigated away from the Webmin install page while it was installing and the server’s file system corrupted. So now I am just going to be patient and reason that if there is a problem it will timeout and tell me. Still waiting…

30 minutes later and still the same. I did navigate away and decided to try apt-get install proftpd from the CLI. However, it tells me that /var/cache/apt/archives/lock is locked and it cannot lock the download directory. Mmmm… I have just used Webmin's file manager to delete this file and will try again from the CLI. Now that seemed to work. I chose inetd as the FTP server type, as I'm not anticipating heavy load, and Webmin now shows me all the options.

System Logs

Apparently system log files can consume a lot of space, so I go to System>Log File Rotation>Edit Global Options, set Maximum size before rotating to 50M (megabytes) and the Number of old logs to keep to 4.

ProFTPD Configuration

1. Based on the howto here, I used the file manager to add the line /bin/false to /etc/shells

2. Then navigate to /home and check my public directory is there. It is, so no problem there.

3. Then, using Webmin's command shell, I add another FTP user (coz I don't know how to add to the one I've already got) with:

                useradd userftp -p <your_password> -d /home/public -s /bin/false

4. I switch over to the server and type passwd userftp to make sure the password has been set.

5. Using Webmin's file manager, I create directories called downloads and uploads in /home/public.

6. Clicking on /home/public in the right pane, I can then click on the Info button and take write access off for group and others, setting permissions to 0755.

7. Then going to downloads I do the same (0755), and for uploads I make sure that all access is enabled with 0777.
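The permission scheme in steps 5-7 can be sketched from the command line too. This just recreates the same layout in a throwaway scratch directory to show what 0755 and 0777 mean – the paths are temporary, not the real /home/public:

```shell
# Recreate the share layout in a scratch directory
base=$(mktemp -d)
mkdir -p "$base/public/downloads" "$base/public/uploads"

# 0755: owner can write; everyone can read and enter
chmod 0755 "$base/public" "$base/public/downloads"
# 0777: everyone can write (the upload drop-box)
chmod 0777 "$base/public/uploads"

# Show the resulting octal modes
stat -c '%a %n' "$base/public" "$base/public/downloads" "$base/public/uploads"
```

Because chmod sets the mode explicitly (umask only affects newly created files), the output is deterministic: 755 for public and downloads, 777 for uploads.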

8. Going back to Servers>ProFTPD, I then click on the Edit Config Files button to make the following changes:

            UserAlias           gary userftp
            ServerType          standalone          (rather than inetd)
            ShowSymlinks        off
            TimeoutStalled      100
            TimeoutIdle         2200
            RootLogin           off                 (line added)
            # It's better for debugging to create log files ;)      (line added)
            ExtendedLog         /var/log/ftp.log    (line added)
            TransferLog         /var/log/xferlog    (line added)
            SystemLog           /var/log/syslog     (line added)

(so I have to delete the log file lines found later)

            # I don't choose to use the /etc/ftpusers file          (line added)
            UseFtpUsers         off                 (line added)
            MaxInstances        8

Uncomment PersistentPasswd so it reads 'off'

            MaxClients          8                   (line added)
            MaxClientsPerHost   8                   (line added)
            MaxClientsPerUser   8                   (line added)
            MaxHostsPerUser     8                   (line added)

Then I added the whole of the next section:

# Display a message after a successful login
AccessGrantMsg "welcome !!!"
# This message is displayed for each access, good or not
ServerIdent             on      "you're at home"

# Set the /home/public directory as the home directory
DefaultRoot /home/public
# Lock all the users into the home directory, ***** really important *****
DefaultRoot ~

MaxLoginAttempts        5

<Limit LOGIN>
AllowUser userftp
DenyAll
</Limit>

<Directory /home/public>
Umask 022 022
AllowOverwrite off
</Directory>

<Directory /home/public/downloads/*>
Umask 022 022
AllowOverwrite off
</Directory>

<Directory /home/public/uploads/>
Umask 022 022
AllowOverwrite on
    <Limit READ RMD DELE>
    DenyAll
    </Limit>

    <Limit STOR CWD MKD>
    AllowAll
    </Limit>
</Directory>

Controlling the Server

To start/stop/restart the server, I should be able to use:

            /etc/init.d/proftpd start
            /etc/init.d/proftpd stop
            /etc/init.d/proftpd restart

Now this all works. It’s not quite what I want, i.e. writing html pages to the webserver but it’s a start.


Installing Ubuntu Server 7.10 (Gutsy) on the Fileserver

Installing from CD

I downloaded the 7.10 Server CD and burnt it. Booting from it gave me a text based installation. All standard stuff. At the end of the installation I chose LAMP, SSH and Samba servers as additional modules to add on.

Testing the network

I then used ping to test the LAN and make sure that the server had been allocated an IP by the Router's DHCP. Pinging the Router showed it alive, and pinging an outside address showed the Internet connection working.


I then used sudo apt-get update to update the package list and sudo apt-get upgrade to upgrade/update my installation. Only two packages were selected 🙂

All is going well so far, the next step is to get Webmin installed.


As I was going to be doing a lot of ‘root’ work, I set up a root password using sudo passwd root and then logged in as root. To prepare for installation of Webmin I had to download and install the required support libraries, which I did with:

apt-get install openssl libauthen-pam-perl libio-pty-perl libmd5-perl

Piece of cake. Except the library libnet-ssleay-perl is no longer available. What to do? I found here that the version should be 1.30-1 – so it should be available somewhere. A bit of googling later and I found a download link for this version. So I typed:


to download the package, and dpkg -i libnet-ssleay-perl_1.30-1_i386.deb to install it. Phew! That was close – didn't think I'd get it going.

Then to get Webmin, I used:


To install Webmin, I used dpkg -i webmin_1.350_all.deb and, as promised, it said:

Webmin install complete. You can now login to https://Server:10000/ as root with your root password, or as any user who can use sudo to run commands as root.

“Server” is the name that I chose to call the server – now nobody said that I was imaginative! 😉 Then, I noticed from the Webmin site that the latest version was 1.370 (d’oh!) so I am going to have to update it once I get in. I should have downloaded webmin_1.370_all.deb instead.

Anyway, trying to access Webmin across the LAN using https://server:10000 failed with a ‘server not found’ message. Wonder what the IP of the server is?

Hmmm… A small bit of research later and I am no wiser as to which Linux CLI command will tell me! Ho hum. Ah yes – log on to the Router and look in the DHCP client list to see which IP has been allocated to Server. Got it – it's
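For what it's worth, ifconfig (the standard tool on Gutsy) would have answered this from the server's own console. A small sketch that pulls the address out of an ifconfig-style line – the line below is canned sample text, not real output, so the 192.168.1.77 address is made up:

```shell
# Canned sample of an 'ifconfig eth0' output line (address is made up)
sample="          inet addr:192.168.1.77  Bcast:192.168.1.255  Mask:255.255.255.0"

# Pull out just the IP that follows 'inet addr:'
addr=$(echo "$sample" | grep -oE 'inet addr:[0-9.]+' | cut -d: -f2)
echo "$addr"    # 192.168.1.77
```

On a live box you would feed real output through the same pipe, e.g. ifconfig eth0 | grep 'inet addr'.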

Updating Webmin

Typing it into a browser (Firefox) takes me to the login page and I can log straight into Webmin – yay! Now this is already one step further than before with 7.04 Feisty. Webmin opens up with some system details and a menu to the left. Clicking on Webmin in that menu and then Webmin Configuration, I can choose Upgrade Webmin (Webmin>Webmin Configuration>Upgrade Webmin). I leave "Latest version from" selected and click the Upgrade Webmin button. It downloads and installs successfully. At the bottom of the page, Webmin tells me that there is one update for this version and I follow the click here link to download it. It is the acl (Access Control List) module and it downloads and installs flawlessly. Now that was easy. Just to check, I click on System Information in the main menu and it tells me that I am indeed running 1.370. Success.

Samba file sharing

1. I go to Servers > Samba Windows File Sharing.

2. Then, I click on Create a New File Share and fill in the details as follows: Share name = public, Home Directories Share = unselected, Directory to share = /home/public, Automatically create directory = yes, Create with owner = root, Available = yes, Browseable = yes, Share comment = Fileserver stuff.

3. To make sure everybody has got permissions on this folder/share, I click on Others>File Manager and navigate to /home. Damn – Firefox tells me I have to download a plugin. Now, I know I have Flash installed, so I am guessing it is either Shockwave or Java – prob. Java. OK, let's do it. Yup – it was Java.

4. I click on the Info button and check Read, Write and List boxes are ticked on User, Group and Other columns.

5. Lastly, I've got to make sure the configuration file is saying the right things. So I navigate to /etc/samba, click on smb.conf and then Edit. A little Java applet window pops up and I can edit the file here – cool, huh? So, what am I looking for? Finding the line ; security = user, I change it to ; security = share.

6. Now I can scroll down to the end of the file, and change:

comment = Fileserver stuff
path = /home/public

to be:

		comment = public		
		path = /home/public		
		public = yes		
		writable = yes		
		create mask = 0777		
		directory mask = 0777		
		force user = nobody		
		force group = nogroup

I then click on the Save & Close button. Phew - nearly there.

Windows Workgroup

Right. To check I am going to be part of the same Workgroup as the rest of my LAN, I click on Servers>Samba Windows File Sharing>Windows Networking in the Global Configuration section. Here I can set the Workgroup to MSHOME and click Save. That should be it – now to test the share from Windows…

Nah – it is asking for a user name and password to get into the share, and it shouldn't do that. Interestingly, from another Linux box I can write to this directory – no problem. Hmm, a small problem to solve. I shall be back.

Solution Found

Another dumb mistake, easily solved. In the instructions above, you should notice a line in the smb.conf file that now reads "; security = share". Remove the semi-colon at the front of the line! It all works beautifully now.
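Putting the fixes together, the relevant pieces of smb.conf end up looking like this sketch (paths and names as used above):

```
# /etc/samba/smb.conf - the bits changed in this post
[global]
   workgroup = MSHOME
   # semi-colon removed from the line below so it takes effect
   security = share

[public]
   comment = public
   path = /home/public
   public = yes
   writable = yes
   create mask = 0777
   directory mask = 0777
   force user = nobody
   force group = nogroup
```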

Job done. 🙂