Friday, 28 August 2009

Linux network setup

Because I always forget the syntax of this file....

The file controlling network IP address settings on Debian-based systems is /etc/network/interfaces. To set static IP addresses use the syntax below (the first example is IPv6, the second is standard IPv4):

iface eth0 inet6 static
address 2001:470:1f09:12e9::<host address>
netmask 64
gateway 2001:470:1f09:12e9::1 (or whatever your gateway is)

iface eth0 inet static
address 192.168.0.5
netmask 255.255.255.0
gateway 192.168.0.1
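Put together, a minimal /etc/network/interfaces for a host with a static IPv4 address might look like this (addresses are examples; the loopback stanza is normally already present):

```
# /etc/network/interfaces (Debian) - example static configuration
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.5
    netmask 255.255.255.0
    gateway 192.168.0.1
```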

Force a restart of the network with:
# ifup --force eth0

Adding a route for IPv6:
# route -A inet6 add 2001:470:1f09:12e9::/64 gw 2001:470:1f09:12e9::200
Another way to add a route that sends everything via 2001:470:96fb::1 (replace "add" with "del" to delete):
# /sbin/ip -6 route add ::/0 via 2001:470:96fb::1
Show current IPv6 routes:
# route -n -A inet6

The DNS servers used for name resolution are set in /etc/resolv.conf

Thursday, 27 August 2009

Apache SSL setup

Generate CSR
A CSR needs to be sent to the CA (Certificate Authority - Verisign, Thawte, etc) for them to sign the key. Once it is signed you can import it into Apache so pages can be encrypted without the user being warned about a dodgy SSL certificate. First you need to create a key file for the server (1024 bits was common at the time of writing; 2048 bits is now the usual minimum):
# openssl genrsa -out server.name.com.key 2048
Next you need to use this key to create the CSR
# openssl req -new -key server.name.com.key -out server.name.com.csr
You should be able to send the *.CSR file to a CA for them to sign using their "buy certificate" pages.
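The two openssl steps can also be run non-interactively by passing the subject on the command line, which is handy for scripting - a sketch, where server.name.com and the subject fields are placeholders:

```shell
# Generate a 2048-bit RSA private key (no passphrase)
openssl genrsa -out server.name.com.key 2048

# Create the CSR from the key, supplying the subject non-interactively
openssl req -new -key server.name.com.key \
    -subj "/C=GB/O=Example Ltd/CN=server.name.com" \
    -out server.name.com.csr

# Sanity-check the CSR before sending it to the CA
openssl req -in server.name.com.csr -noout -verify -subject
```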

When you get the signed certificate back, run a2enmod ssl to enable SSL in Apache, then edit the site in sites-enabled, entering the correct key and certificate file paths. Restart Apache and it should be working.
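The relevant part of the site file in sites-enabled ends up looking something like this (paths and names are examples - put the key and certificate wherever your distribution keeps them):

```
<VirtualHost *:443>
    ServerName server.name.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.name.com.crt
    SSLCertificateKeyFile /etc/ssl/private/server.name.com.key
    # Many CAs also supply an intermediate/chain certificate:
    # SSLCertificateChainFile /etc/ssl/certs/ca-chain.crt
</VirtualHost>
```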

Sunday, 9 August 2009

Desktop software loadout

Stuff I use....

  • Firefox web browser. I normally also load in most of these addons (nothing fancy)
    • Autopager (no more next page, next page, next page....)
    • Delicious (Bookmark manager - Separate tags for the main work/home toolbars, but all loaded in the same account)
    • Download Them All (downloader, supports multi thread downloads, resumes etc)
    • Download Statusbar (minimalistic downloading, not using so much any more)
    • FireGPG - GPG encrypting plugin
  • Encryption
    • GPG
    • Seahorse for GPG key management (linux)
  • Skype
    • Shame it looks rubbish on Linux and they use a protocol that nobody else can link into! Rumors of Skype being released open source soon (heard Dec 2009)
  • f-spot (Linux)
    • Photo management and upload to Flickr
  • Dropbox
    • Paid for extra nice 50GB account and save all my photos etc online. Easy to access from anywhere and very nice sync between computers. Now has selective sync which is extra nice for laptops with small hard drives.
  • Shrewsoft VPN (Linux & Windows)
    • VPN client that connects to our Netscreen SSG 140 at work.
  • Multimon taskbar (Win) - shows only the programs on current screen on the taskbar
  • Spacesniffer for analyzing hard drive space used
  • Cropper for screenshots

Tuesday, 7 July 2009

LVM Logical volume management

Physical Volumes
Physical volumes are effectively the same as partitions on the physical hard disk.  To create a physical volume out of the partition:
# pvcreate /dev/sdb3
Volume Groups
Groups of one or more physical volumes that contain logical volumes. This allows a logical volume to be spread over several partitions etc.  vgdisplay shows the current volume groups and vgextend is used to add another physical volume to a volume group
# vgdisplay
# vgextend <volume group name> <physical volume name>
Logical Volumes
The "end user" partitions.
# lvdisplay
# lvextend -L23GB /dev/VolGroup00/LogVol00 <— expand logical volume to 23GB
Resize file system on logical volume
If the logical volume has been increased then to expand the file system to use the whole logical volume run the following command (ext2resize is the older tool; on current systems resize2fs does the same job):
# ext2resize /dev/VolGroup00/LogVol00
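Putting the pieces together, growing a logical volume onto a newly added partition looks like this (device and volume names are examples, all commands need root, and resize2fs is assumed for ext file systems):

```
# pvcreate /dev/sdb3                         <- make the new partition a physical volume
# vgextend VolGroup00 /dev/sdb3              <- add it to the volume group
# lvextend -L23GB /dev/VolGroup00/LogVol00   <- grow the logical volume to 23GB
# resize2fs /dev/VolGroup00/LogVol00         <- grow the file system to fill the LV
```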
Activate LVM under live CD
A good walkthrough for activating LVM under Knoppix (useful because LVM volumes are hard to work on while the partitions are in use) can be found here: http://linuxwave.blogspot.com/2007/11/mounting-lvm-disk-using-ubuntu-livecd.html. In short:
Boot using the live cd.
# apt-get install lvm2 (already installed in knoppix 5.1.1)
# pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [74.41 GB / 32.00 MB free]
Total: 1 [74.41 GB] / in use: 1 [74.41 GB] / in no VG: 0 [0 ]

# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2

# vgchange -a y
2 logical volume(s) in volume group "VolGroup00" now active

# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [72.44 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.94 GB] inherit

# mount /dev/VolGroup00/LogVol00 /mnt

Wednesday, 1 July 2009

VMWare ESXi 3.5 and 4

Introduction
First (and possibly most important) I would like to say that all of the software for getting this going was free of charge, even for commercial use as far as I am aware. VMWare's logic is that after deploying you will want to either use their advanced features (big £) or you will need support (big £). I'd say it has worked because I love this stuff now and the odds are high that we will reach a point in the next few years where we decide to upgrade for more features.  Either way you can play with VMWare for free and see if it works for you, even go live with production servers without paying a penny (apart from hardware).

Update Feb 2010: Yep, just purchased a 3 year VMWare Essentials bundle. Guess it worked!

Whilst we have been doing daily backups to tape of our database systems for ages (as all good IT people should!) there are two problems with this.

  1. The backups are done nightly. If something happened at 6pm then all of that day's work would be lost.
  2. The backups only cover the raw data. If the server died then it would take quite a while to reinstall all of the systems and settings.
When a server became available recently I decided to push through with backups as virtual machines. This server was suitably powerful (quad core 3GHz CPU with 8GB of RAM, since upgraded to 16GB _just in case_).

My plan for the first phase was to run backup servers in the VMWare environment. Any time I wanted to take a copy offsite I could then copy the virtual machine from VMWare to some other media and go. To restore I could just copy the files to any VMWare server and power it up. Data would only be up to the point I did the backup, but I could then use the nightly tape backups to bring it more up to date.

The second phase was to start replicating changes automatically from the live database server to the backup database server. This would give me a 99.9% up to date copy of the DB system ready to become live if the live database server died. (Please remember this is not a proper backup - if the live system deletes half its customer records the replication will do this on the "backup" system too...)

The third phase was moving the live servers into VMWare once I was happy that the performance of the backups was adequate. Quite possibly this would just involve migrating from existing hardware to existing hardware running VMWare, with an iSCSI drive cage for files. I know the hardware is powerful enough now, so it will be then too (with minor VMWare overheads). This also opens up options with VMWare infrastructure like VMotion (moving between physical servers), failover etc., which would make upgrades easier. All this is great stuff but expensive and I'm doing this one step at a time. The things I can use it for will keep improving and changing so I'm not going to spend too much time planning now. :)

Installing VMWare
First, since it's all free, you might as well get the latest version of VMWare ESXi from the VMWare website. Also get the licence key, as otherwise the system will expire after 60 days. With the free key it's an unlimited licence in terms of time (some reporting and remote connection features are limited).

When you install VMWare ESXi it will completely overwrite the hard drive of the server. The install itself is fairly easy as long as the hardware is supported. If not then you will probably be in trouble. There are lists of known working hardware available online, but many systems not on the compatible list also work.

When it's installed all you get is a yellow and grey screen with very few options. Use these screens to set up the root password and network, and make a note of the IP address. Browse to this IP in a web browser and you will get a link to download the vSphere client, which is how you manage everything.

Install the client on any Windows desktop and run it to connect.

Deploying to VMWare
There were basically two options for creating systems on VMWare.

First, and most obvious, is to install from scratch in a virtual machine. This may well be the best solution if you have a slightly old install anyway - you don't want to take all the old temp files etc over if you can avoid it.

Second is to convert an existing server to a virtual server. The VMWare converter runs surprisingly well and you end up with a clone of the physical machine which can run inside VMWare. It even sorts out new partitions, drivers etc.

Since the main use of this VMWare server was as a backup server and I did not really want to pay for RedHat support for backup machines, I decided to create these servers running CentOS as it is binary compatible with RedHat. The servers installed flawlessly after I had remembered to enable the CD-ROM drive and uploaded the CentOS install ISO to the datastore (you can also use local ISO files on your desktop or your local physical CD-ROM drive, but the datastore is a bit faster and I wanted a copy available for next time). Apart from the fact that it was all happening in a window it was exactly the same as on a real computer.

For the conversion I was going from a RedHat ES 5.1 server. It took a while, but the new system worked on first boot with the exception of networking - because the MAC address had changed, RedHat set up a new default config and it was trying to use DHCP. As RedHat keeps the old config it was easy enough to copy the old settings back in and restart networking.

Backing up
The easiest way to backup a virtual machine is to turn it off, then use the datastore browser to download all the files before turning it back on again. This gives you a clean full backup with no complications. However this is not easy to automate.
Note: If you have a thin disk then the datastore will show the space used, but when you download it will convert to a "fat" disk. E.g. a 200GB thin disk that shows as 20GB in the datastore will become 200GB as it is downloaded. The datastore will still have the 20GB thin disk, but it will take ages to download.

I wanted backups with as little downtime as possible. I know of the following ways to achieve this:

Buy the extended features/3rd party programs
Use snapshots whilst the machine is shut down
Use snapshots whilst the machine is running (needs more drive space as the entire memory seems to be dumped to disk even if only a small amount is actually used. Also takes longer to create snapshots because of this)

Backup Snapshots with a virtual machine off
The basic idea here is to turn off the virtual machine, take a snapshot and then turn the machine on. This should only take a few minutes and you can then copy the pre-snapshot files at your leisure. It is assumed here that you do not actually have any pre-existing snapshots. If you do then it is still possible but I'll leave working out the differences to the reader. Running with snapshots long term is generally a bad idea as it slows things down a bit (the system has to check the snapshot then the main file for things) and can use lots more disk space depending on how many files change in the VM.

  1. Turn off virtual machine
  2. Backup the <vmname>.vmx file
  3. Take a snapshot
  4. restart virtual machine
  5. Backup the <vmname>.vmdk file
  6. Place all the backed up files in a new directory
  7. Add the VM to the inventory (right click on the .vmx file in the datastore)
This works because the machine is off when you create the snapshot and therefore the snapshot file (<vmname>-000001.vmdk) contains everything that happens AFTER you turn it on. Because the <vmname>.vmdk is no longer being written to it is possible to copy it without corruption. These are the only two files you need, all the rest are generated on the fly.  By doing it this way downtime is small as you can turn the system back on before starting to copy the .vmdk hard disk file (the big one).

Backup Snapshots with virtual machines on (gulp)
If zero downtime is your aim (its what I'd like) then you should really buy the extended features and do this properly. My current practice is as follows.
  1. Make a snapshot with memory (this is what we will restore to later)
  2. Backup the vmx file (virtual machine config file)
  3. Backup the vmsd file (Snapshot index file)
  4. Make another snapshot without memory (only needed to stop the system writing to first snapshot with memory)
  5. Backup the <vmname>.vmdk
  6. Backup the <vmname>-000001.vmdk (post first snapshot file - not used but needed to have something to roll back from)
  7. Backup the <vmname>-Snapshot<n>.vmsn (first memory snapshot so the lower of the two values for <n> - its also a lot bigger (size of memory rather than a few kB) )
  8. Add the VM to the inventory (right click on the .vmx file in the datastore)
  9. Select the VM and choose to rollback with the snapshots.
The virtual machine will be magically on in exactly the state it was in when you took the first snapshot! You do not even need to power it up: when the snapshot was taken it was powered on, so when restored it is still powered on. Apart from a computer which is possibly confused about how the time has changed so much, it's all good to go :)

Note that this is NOT the way that VMWare expects you to do backups; it involves a lot of fiddly steps and is liable to be broken by an update from VMWare any day.  If you can avoid it try to find another way to do this - I like it because the systems I'm working with are not essential and I can't justify the budget to do this properly yet.  As soon as I can I'm buying an off the shelf program to do this.

Sunday, 21 June 2009

Using dd for a secure disk wipe

The best tool I know of for wiping disks is DBAN, which boots from CD and can wipe all attached hard drives to MOD standards. If you are serious about wiping your data forever then I strongly recommend you look at DBAN.  Or take the drive apart and physically destroy the platters.  For a quicker homemade version read on.  Bear in mind that a small mistake here can COMPLETELY AND IRRETRIEVABLY DESTROY ALL YOUR DATA! You have been warned.


UPDATE: This is unlikely to be as effective on newer SSD-based drives due to their wear levelling technology.

To wipe a disk (eg sda) use the following (multiple times if paranoid):
# dd if=/dev/urandom of=/dev/sda ; sync
Or a quicker version which is not as secure but should be fine unless you have pissed off GCHQ or similar (in which case it's probably already too late anyway):
# dd if=/dev/zero of=/dev/sda ; sync
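The same dd pattern can be tried out safely against an ordinary file instead of a block device before pointing it at a real disk (wipefile.img here is just a scratch file):

```shell
# Overwrite a 4 MiB scratch file with zeros, then flush to disk
dd if=/dev/zero of=wipefile.img bs=1M count=4
sync

# Confirm the size: 4 x 1048576 = 4194304 bytes
ls -l wipefile.img
```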
To check progress, find the PID of the dd process and send it the USR1 signal with kill; dd will print its progress to stderr:
# ps -A | grep dd
# kill -USR1 <pid>

Monday, 15 June 2009

NTP time setup on Windows

Main NTP server I use: 0.uk.pool.ntp.org.  To find a suitable server pool for your location visit http://www.pool.ntp.org/en/ and browse the active servers (on the right)

To update the time on a domain computer from the PDC use the following command (can take a few minutes to actually take effect):
# w32tm /resync
To identify the current time server:
# net time /querysntp
To set the time server:
# w32tm /config /manualpeerlist:<server name> /syncfromflags:manual /reliable:yes /update
To reset the ntp server to the domain controller:
# w32tm /config /syncfromflags:domhier /update
To report on time difference between current computer and server:
# w32tm /stripchart /computer:<servername> /samples:5 /dataonly

Wednesday, 10 June 2009

Runlevels and services in linux

Redhat
To list all services:
# chkconfig --list
To turn on/off a service
# chkconfig [service] [on|off]
Debian
To start automatically on boot:
# update-rc.d [service] defaults

Monday, 8 June 2009

Linux local user account password policies

Setup a password policy
NOTE: Tested on Redhat ES5 only so far, should work on all Linux
This will set up a password policy of a minimum of 8 characters with at least one each of [uppercase|lowercase|numbers|symbols]. Passwords expire after 90 days and if not reset within a further 7 days the account is deactivated. Warnings are issued from 7 days before password expiry.  This is all for local accounts only, not domain accounts.

Edit /etc/pam.d/system-auth
Edit the password line so it looks as follows:
password requisite pam_cracklib.so retry=3 minlen=8 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
ucredit is uppercase, lcredit lowercase, dcredit digit (numbers) and ocredit is other (symbols). By having a negative number you are requiring at least that many of each type.

Edit /etc/login.defs
Set the following values, all fairly self explanatory
PASS_MAX_DAYS 90
PASS_MIN_DAYS 0
PASS_MIN_LEN 8
PASS_WARN_AGE 7
Edit /etc/default/useradd
Sets how long after password expires before locking the account
INACTIVE=7
Enable password history to prevent reuse
These 3 steps will probably have been done by system install, check anyway.
# touch /etc/security/opasswd
# chown root:root /etc/security/opasswd
# chmod 600 /etc/security/opasswd
Edit /etc/pam.d/system-auth and add to the line as shown below:
password sufficient pam_unix.so nullok use_authtok md5 shadow remember=24
Lock account after set number of invalid passwords
# touch /var/log/faillog
# chown root:root /var/log/faillog
# chmod 600 /var/log/faillog
Edit /etc/pam.d/system-auth.
Directly under "auth required... pam_env.so" add:
auth required pam_tally.so onerr=fail deny=6 no_magic_root unlock_time=1800
Directly under last "account required" add:
account required pam_tally.so no_magic_root
Manual commands
Manually set for pre existing user accounts
To show current settings on account:
# chage -l [username]
For pre existing users you need to manually update their security token for this to take effect
# chage -m0 -M90 -I7 -W7 [username]
m=min age, M=max age, I=inactive after period, W=warn period before expire, all in days

Force expire a users password
To force a password to expire and therefore require a new password from them on next login (useful with new accounts where you want the user to change the password to something they will remember immediately):
# chage -d0 [username]
Show a count of all failures
# faillog -a
To clear failed passwords for a user
# faillog -u [username] -r

GPG - Public private key encryption

GPG is a good tool for encrypting with public private key cryptography. I tend to create a signing key with no expiry and then add encryption keys which expire after one year. Adding additional encryption keys is done using the addkey command.  Many people either create a whole key pair that never expires (not quite as secure) or create a whole new key pair every year (paranoid?).  As ever the less often you change things the less secure but unless you are encrypting military secrets or similar GPG is probably more than adequate.

Usage

Generate new key
# gpg --gen-key
To list keys
# gpg --list-keys
Using keyservers
To find a key on a keyserver use the following command. A list of all matching keys will be displayed along with the ability to select which you wish to import:
# gpg --search-keys 'domain.com'
To update keys on your keyring to the latest version on the keyservers run the following command:
# gpg --refresh-keys
Edit a GPG key
# gpg --edit-key 01234567
->Trust (sets the trust level on a key)
->lsign (signs locally - will not export to key servers)
->quit

To encrypt a file
# gpg -e -r [email protected] [file]
--batch - batch mode, will not prompt for anything, will just work or fail
--armor - ASCII armour the file (use only "normal" chars; less likely to be corrupted by a system which tries to interpret it, but makes the resultant file bigger)
--always-trust - automatically trust recipients for this encryption. Useful for eg scripts where you do not want to have to create a private key and sign the recipient keys, and don't want to hit "y" to override this check each time.

To decrypt a file
# gpg -d [file]
To set up GPG for automatic encryption of a file with cron, first install GPG and import the keys you need from the keyservers (the package is usually named gnupg or gnupg2 depending on the distribution):
$ sudo yum install gnupg
$ sudo -H gpg --keyserver keyserver.ubuntu.com --search-keys <name_of_person_or_company>

The -H is required for sudo to use the root $home otherwise it tries to use the current user $home and fails with bad permissions. Import any keys you need and repeat for all required keys.
Now you can run the following in root's crontab inside a script to encrypt a file using GPG:
gpg --batch --always-trust -e -r email.of@recipient <file to encrypt>

Tuesday, 2 June 2009

Identify processes running on a network port

If you want to check what process is using one of your network ports a good utility to use is netstat:

[root@sideshow ~]# netstat -natup | grep 161
udp 0 0 0.0.0.0:161 0.0.0.0:* 9580/snmpd
In this example I checked port 161 and found that snmpd (a network monitoring program) was running on the port.  You could also grep for "apache" to see what ports apache is listening on.

Exchange 2003 SSL with different local and FQDN

If you use a different SSL FQDN for external access compared to the internal hostname then it will not be possible to edit settings in Exchange System Manager; it fails with the error c103b404. Editing the IIS exchadmin & public folders should fix this but often does not. You can manually force the fix using the ADSIEDIT MMC plugin: browse to the config setting & remove :443:.
Full instructions in experts exchange question 22939627

SSH Tunnels

SSH tunnels can provide encrypted tunnels over the internet and also tunnel through many firewalls - effectively it is a VPN.

Options:
-R - connections to remote server will be tunneled
-2 - Use ssh version 2 (more secure)
-N - Don't execute any remote commands, just create the tunnel
-f - Background process
-C - use compression
# ssh -R 1234:local-server:80 username@remote-server
With the above command anyone connecting to remote-server:1234 will be redirected to local-server:80 as seen from the machine initiating the ssh connection. (By default the remote end only listens on localhost; set GatewayPorts in the server's sshd_config to allow outside connections.)

Autossh
Autossh can be used to maintain a ssh tunnel, bringing it back up automatically if it fails.

Options as above with addition of:
-M - monitoring port to use
# autossh -2 -fN -M 2000 -R1234:localhost:80 user@remote-server

SSH auto login key generation

To generate a key for SSH login purposes run the following on the computer you want to log in FROM.  The key files will be created in the user's ~/.ssh/ directory.
# ssh-keygen -t rsa (Could also use -t dsa)
Enter a password if required; if intending to use it for an automated account then leave the password blank (in which case I STRONGLY recommend you also restrict the command as shown further down). Copy the *.pub file to the remote server and then add the key to the authorized keys file of the remote user. Also check the permissions are set correctly or it will not work. (Newer OpenSSH versions use plain authorized_keys; authorized_keys2 is deprecated.)
# cat *.pub >> ~/.ssh/authorized_keys2
# chmod 600 authorized_keys2
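The whole key setup can be rehearsed locally in a scratch directory before touching a real server (demo_key and the fake_remote directory are stand-ins for the real account):

```shell
# Generate an RSA key pair with no passphrase into ./demo_key and ./demo_key.pub
ssh-keygen -q -t rsa -b 2048 -N "" -f demo_key

# Simulate adding the public key on the remote side
mkdir -p fake_remote/.ssh
cat demo_key.pub >> fake_remote/.ssh/authorized_keys

# Permissions matter: sshd refuses keys in group/world-accessible files
chmod 700 fake_remote/.ssh
chmod 600 fake_remote/.ssh/authorized_keys
```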
Login with the following to see verbose errors etc:
# ssh -v user@remotehost
It is possible to restrict the command which can be run with this type of connection to increase security. This allows us to only allow a specific rsync command for example so if the key is compromised it will limit the damage that can be done. To enforce this edit the beginning of the authorized_keys file as shown below, replacing with the command you want to be run and the correct key.
command="rsync --server -vlogDtprc . /var/www/html" ssh-dss AAAAB3Nz…[rest of key goes here]
See also: SSH tunnels post

Troubleshooting:
Check the file permissions of all the key files (and directories), as if they are incorrect it will fail with an unhelpful error message.
Note that password expiry policies still apply even if only using keys. The account will still block logins and enforce a password change even if you are trying to login with a key file - not what you want to happen on an automated account...
Check the spelling of the file "authorized_keys" - the number of times I have typed "authorised_keys" by mistake....

Monday, 1 June 2009

Redhat useful commands

To find installed version of a package
# rpm -qa | grep [package]
Configure networking on RedHat
#/usr/sbin/system-config-network
Configure firewall and SELinux on RedHat
#/usr/sbin/system-config-securitylevel-tui

Red Hat ES4
# rpm -e [packagename] -> Uninstall package
# up2date --nox -u -> Updates all packages
# up2date --nox -i [package_name] -> Installs package
# up2date --nox -d [package_name] -> Downloads package only
# up2date --showall | grep [package_name] -> Shows versions of packages available
# up2date --configure --nox -> Configure settings eg skip lists
Red Hat ES5
# yum list updates
# yum update
# yum list firefo*
# yum install [package]
# yum remove [package]

Thursday, 28 May 2009

Defragment Exchange 2003 datastore

Over time the mail store for Exchange 2003 becomes fragmented. To resolve this the database needs to be taken offline, which effectively kills Exchange, then eseutil needs to be run on the mailstore files. After unmounting the mailboxes it is a good idea to make a backup of the mailstore just in case... for this to work, free space of 110% of the mailstore is required. The *.stm file is automatically defragmented when you defrag the related *.edb file.

  1. In Exchange System Manager, right-click the information store that you want to defragment, and then click Dismount Store. (you probably want to tell people the server will be offline for several hours depending on the size of the datastore and speed of the server)
  2. Make a backup copy of the files in the mailstore directory somewhere safe!
  3. C:\program files\exchsrvr\bin> eseutil /d c:\progra~1\exchsrvr\mdbdata\priv1.edb

Note: eseutil.exe will create the temp files in the current working directory, make sure you have 110% of the file size you are working on as free space in this partition.
Note: On our server the pub1.edb (1.7GB) took 5 minutes to defrag and the priv1.edb (31GB) took about 2 hours

Wednesday, 27 May 2009

MiBew (AKA WebIM) Web chat system

Nice little chat program for web sites, operating like the "talk to an operator" popups you see on many sites. Now renamed to Mibew (Webim backwards...).

We have been using this on our web site for a year or so now and its proved to be very reliable and easy to use.

http://openwebim.org/

Packages required on Debian 5.0
apache2 php5 php5-mysql mysql-server mysql-client

Linux user account setup (basic)

Passwords
To change a password for another user:
# passwd [username]
To lock an account and unlock an account:
# passwd -l [username]
# passwd -u [username]
WARNING: If Locking the root account ensure that you have allowed and tested access with sudo first!
Users and Groups Creation and Deletion
To list the groups a user is a member of:
# groups [username]
Groups:
# groupadd [groupname]
To create a user and set primary group:
# useradd -g [groupname] [username]
To create a user and set secondary group:
# useradd -G [groupname] [username]
To add group to existing users secondary groups:
# usermod -a -G [groupname] [username]
To delete a user:
# userdel -r [username]
-r deletes the user's home directory as well as the account

Misc
To see a summary of logins to date (leave username blank to see all logins):
# last [username]

Sudo setup

To use sudo, set up users and groups as normal in Linux, then use the visudo command to edit the configuration file.

To allow members of a group to run any command (as root, the default):
%groupname ALL= ALL
To allow members of the group to run any command as any user, with the exception of su (the path may be /bin/su or /usr/bin/su depending on the distribution):
%groupname ALL=(ALL) ALL, !/bin/su
Now prepend any command with sudo and it will run as root. If you forget you can rerun the last command with !! eg:
# visudo
Error, permission denied
# sudo !!
runs visudo as root

To run a command as a specific user:
# sudo -u <username> <command>

Tuesday, 26 May 2009

Use runas to access control panel on XP

NOTE:  This is less likely to work as more patches are applied to Windows XP - the base system changes and this executable may no longer be compatible (if it's even there!)

With administrator access denied to Windows XP users it can be hard to maintain their desktops without logging out and in repeatedly. Previously it was possible to runas iexplore.exe, however this ability was blocked around XP SP2. On older XP machines, however, a backup of the IE6 exe is available (although it could be buggy as it's not intended to be used).
Use the runas command on c:\windows\ie7\iexplore.exe, change the address to c:\ and hit the folders button. You now have an old-style explorer window running as an administrator.  You can now access files, control panel etc.

Copy files to all user desktop over network

To copy a file to the all user desktop of a machine over the network use the following command:

xcopy /O testfile.txt "\\ws123\c$\Documents and Settings\All Users\Desktop"

/O - preserve permissions
/S - include sub directories and files

You cannot xcopy directly to \\machine\c$ - you need to xcopy into a folder.

Give full access to the "domain user" group first so the end user can access the file. The xcopy command supports tab completion over the network which makes this easier.

Date and time stamped in linux history

To add the date and time to the history add the following two lines to /etc/profile:

HISTTIMEFORMAT="%D %T"
export HISTTIMEFORMAT

Now the history command will also display when the commands were run, very useful for audit purposes.

Quick remote backup over ssh

To create a quick backup from one linux box to another the following command can be used (it will ask for your password on the remote machine unless you have setup ssh keys):

$ tar zcvf - /path/to/files | ssh user@remote-server "cat > backup.tgz"
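The pipe can be tested without a remote machine by swapping ssh for a plain local cat (demo_dir is a throwaway example directory):

```shell
# Create some example data to back up
mkdir -p demo_dir
echo "hello" > demo_dir/file.txt

# Same pattern as the ssh version, but writing locally:
# tar streams the compressed archive to stdout, cat writes it to a file
tar zcf - demo_dir | cat > backup.tgz

# List the archive contents to confirm the backup worked
tar ztf backup.tgz
```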

Welcome!

I work as an IT Manager for a small (~50 people) company selling holidays. Because we are a small company and try to do most things in house, I end up getting involved with everything from end user training through VB macro writing in Excel to server and database installation and maintenance, which is great fun. Trying to remember the obscure setting or program I last used 9 months ago is not so much fun, which is why I try to make notes in this blog. Content is very specific without much background information, but I decided to put this on the web in the hope it might be hit by a few Google searches and be of use to someone.

Enjoy!