Set Up Windows Event Forwarding with Sysmon using Group Policy. (Free SIEM Part 3)

This is the third tutorial in the “Free SIEM” series.

Today the aim is to set up log forwarding from all our endpoints to a central log server using Group Policy, and as an added bonus we are going to forward all Sysmon logs as well.

For the topology we have a Domain Controller (DC), a separate Event Log collector server (EL), and other Windows desktops on the domain (WD).

First we open the Group Policy Management Console on our DC to create a new GPO for our forwarding rules. For the purpose of this tutorial our test domain is named “glitchcorp.co.uk”; wherever you see this, replace it with your own FQDN.

Our new Policy is named “Event Forwarding”

Go to “Computer Configuration/Policies/Administrative Templates/Windows Components/Event Forwarding” to configure the target Subscription Manager – basically the log server which will be collecting all the forwarded logs (EL). Right-click the highlighted option.

Enable the setting, then copy the highlighted text, add your server details, and set the final option (Refresh=) to 60 as shown.
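
With the example topology above, the subscription manager entry ends up looking something like the line below (EL.glitchcorp.co.uk is an assumed hostname for the collector – substitute your own, and use HTTPS on port 5986 if you have WinRM configured for it):

Server=http://EL.glitchcorp.co.uk:5985/wsman/SubscriptionManager/WEC,Refresh=60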

Save the configuration. Now we set permissions for the Security log to ensure it can be read. Go to “Computer Configuration/Policies/Administrative Templates/Windows Components/Event Log Service/Security”, then right-click the setting and select “Edit”.

Enable the setting, but we need a permission string for the “Log Access” box. For this we need to open PowerShell.

We use “wevtutil.exe” to get the existing permissions and add the new account to the end. Run the command below, then copy the SDDL string that is returned. Paste this into your “Log Access” box and add either (A;;0x1;;;NS) or (A;;0x1;;;S-1-5-20) to the end. This gives “NETWORK SERVICE” read access to the logs. (NOTE: Due to the way Sysmon works this will not grant access to the Sysmon logs. We will set that in the Registry using a different method.)
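
The command from the screenshot isn't reproduced here, but it is the standard wevtutil “get-log” call against the Security channel; the value after “channelAccess:” in its output is the SDDL string you need to copy:

wevtutil gl security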

Save your settings. Next we go to “Computer Configuration/Policies/Windows Settings/Security Settings/Restricted Groups”, then right-click and select “Add Group” as shown.

Then add the members as shown. (You only need one entry for the NETWORK SERVICE but I had some issues so added both ways here then saved. If it identifies both without issues, then keep “NT AUTHORITY\NETWORK SERVICE”, and remove the other). Save your settings.

Now we make the Registry change for Sysmon log permissions.

Go to “Computer Configuration/Preferences/Windows Settings/Registry” and right-click to add a new Registry Item.

Complete as shown. The full path is shown below, and the Value data is the same as we used earlier.

This is the “key path”
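
The screenshot isn't included here, but the key path normally used for the Sysmon operational channel is the WINEVT Channels key below (verify it against your own environment – the value name is “ChannelAccess” and the type is REG_SZ):

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Channels\Microsoft-Windows-Sysmon/Operational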

This is a reminder of the PowerShell query.
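
It is the same wevtutil call as before, just pointed at the Sysmon channel; again the “channelAccess:” line of the output holds the SDDL string:

wevtutil gl Microsoft-Windows-Sysmon/Operational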

Paste this into your “Value Data” box and add either (A;;0x1;;;NS) or (A;;0x1;;;S-1-5-20) to the end, as before. This will give “NETWORK SERVICE” read access to the logs. Save your settings.

NOTE: Don’t run the command below. It is just to show what the Registry entry is doing and give you some understanding. You could run this command if you were forwarding logs from a single machine, but in a large environment you should use Group Policy rather than pushing lots of scripts or running the same thing over and over on each individual machine.
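
The command from the original screenshot isn't reproduced here, but it amounts to setting the channel access SDDL directly with wevtutil – something along these lines, where the placeholder is the full SDDL string you built earlier:

wevtutil sl Microsoft-Windows-Sysmon/Operational /ca:"<existing SDDL>(A;;0x1;;;S-1-5-20)"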

OK, so we have now set up the log forwarding location and the required permissions. Next we need to ensure the required services are running on the source computers in the domain so they can forward their logs to our collector server.

Browse to Computer Configuration/Preferences/Control Panel Settings/Services/

Right-click and select New Service

Complete as shown

Save your settings then do the same for Sysmon.

You should have 2 entries as shown.
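
The two services being pushed out here are Windows Remote Management (WinRM) and the Sysmon service. If you want to confirm they are running on an endpoint once the policy applies, something like the following works from PowerShell on recent Windows versions (the Sysmon service is named “Sysmon” or “Sysmon64” depending on which binary you deployed – the wildcard below catches either):

Get-Service WinRM, Sysmon* | Select-Object Name, Status, StartType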

Now we need to configure WinRM to listen, and the firewall to allow it, so the endpoints can “push” their event logs to the EL server.

Go to “Computer Configuration/Policies/Administrative Templates/Windows Components/Windows Remote Management/WinRM Service” and right-click the highlighted options.

Configure as shown.

Save your settings. Now we go to “Computer Configuration/Policies/Windows Settings/Security Settings/Windows Firewall with Advanced Security”

Right-click “Inbound Rules” and select “New Rule”.

Complete each box as shown

That’s the GP work completed, so let’s open PowerShell on the DC and update the domain policy. You can also run this command on any endpoints you want to make sure are up to date with these settings, ready for testing.
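
The command in the screenshot is the usual forced Group Policy refresh:

gpupdate /force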

If you want to test that an endpoint is receiving the new policy you can use the command below. You should see our “Event Forwarding” policy listed under “Applied Group Policy Objects”.
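
The command in question is gpresult, run on the endpoint itself:

gpresult /r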

NOTE: Each of the endpoints you will be sending logs from may need to have the following command run from an elevated Powershell window “WinRM quickconfig”.

It all depends on what OS you are using; if WinRM is already running, the command will do no harm. On Windows 10 and Server 2012 R2 it should already be running.

We head over to our EL server now to complete the setup on the collector. Run the command below from an elevated PowerShell window.
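
The command is the Windows Event Collector quick-config, which sets the collector service to start automatically and enables the subscription infrastructure:

wecutil qc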

Then open Event Viewer

Let’s create our first subscription. Right-click and create a new Subscription.

That’s the Sysmon Subscription sorted, now we need one for the other Windows logs.

Right-click and repeat with different settings this time as shown.

Enable both Subscriptions so they have the green tick.

You can right-click each one and check “Runtime Status”; this will show a list of connected machines.

Now go to “Forwarded Events” and watch all your logs come through. Make sure you are seeing entries for “Sysmon”, “Application”, “Security”, “Setup” and “System”. (Although in my screenshot all you can see is Sysmon lol!)
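
If you prefer to check from PowerShell rather than clicking through Event Viewer, you can pull the most recent forwarded events like this:

Get-WinEvent -LogName ForwardedEvents -MaxEvents 10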

Congratulations! Yes it’s a bit of a slog but it is worth it. Make sure you come back for part 4.

Important Event IDs you should be monitoring in Windows.

Security
Event ID – Description
4756 – A member was added to a security-enabled universal group
4740 – A user account was locked out
4735 – A security-enabled local group was changed
4732 – A member was added to a security-enabled local group
4728 – A member was added to a security-enabled global group
4724 – An attempt was made to reset an account's password
4648 – A logon was attempted using explicit credentials
4625 – An account failed to log on
1102 – The audit log was cleared
4624 – An account was successfully logged on
4634 – An account was logged off
5038 – Detected an invalid image hash of a file
6281 – Detected an invalid page hash of an image file

Application
Event ID – Description
1000 – Application Error
1002 – Application Hang / Crash
1001 – Application Error – Fault Bucket
1 – EMET
2 – EMET

System
Event ID – Description
104 – Event log cleared
1102 – The audit log was cleared
4719 – System audit policy was changed
6005 – Event Log service stopped
7022-7026, 7031, 7032, 7034 – Windows service fails or crashes
7045 – A service was installed in the system
4697 – A service was installed in the system
7022 – EVENT_SERVICE_START_HUNG
7023 – EVENT_SERVICE_EXIT_FAILED
104 – Event log was cleared
6 – New kernel filter driver

Firewall
Event ID – Description
2005 – A rule has been modified in the Windows Firewall exception list
2004 – Firewall rule added
2006, 2033, 2009 – Firewall rules deleted

Terminal Services
Event ID – Description
23 – Session logoff succeeded
24 – Session has been disconnected
25 – Session reconnection succeeded
1102 – Client has initiated a multi-transport connection

Install Graylog 3 on Ubuntu 18.04 (Free SIEM Part 2)

Hello all, this is the first of a new series of posts which will show you how to set up a free centralised logging solution for any environment.

After much trial and error I think I’m set on using Graylog, Windows Event forwarding, Sysmon, and OSSEC/Wazuh.

All the official documentation for Graylog can be found here: Graylog Docs

Ubuntu is still my favourite flavour of Linux so we will be starting with the base install of Server version 18.04.

Let’s get started. As always, we begin by updating the package lists.

sudo apt-get update

And if required, upgrade your install. (If you are starting with a fresh install but didn’t tick “download updates from the internet” you will need to do this.)

sudo apt-get upgrade

Now that we are up to date, let’s start installing the dependencies. First up are these 4 packages; make sure you do all these steps in order or it will not work.

 sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen 

If you get no errors when installing we move on to installing mongodb from the official repository.

 sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4 
 echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list 
 sudo apt-get update
sudo apt-get install -y mongodb-org 

If again you receive no errors, we move on to enabling it on start up.

sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl restart mongod.service

Graylog recommends using Elasticsearch version 6. You can find the installation guide here if you need to refer to it, but you can install using the following. (This is not the latest version of Elasticsearch; the latest is not supported, so don’t be tempted to try it.)

 wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - 
 echo "deb https://artifacts.elastic.co/packages/oss-6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list 
 sudo apt-get update && sudo apt-get install elasticsearch-oss 

Before we can configure and start Elasticsearch we need to edit the configuration file which is located at “/etc/elasticsearch/elasticsearch.yml”

We cd to the correct directory

cd /etc/elasticsearch

Then open the file

sudo nano elasticsearch.yml

then find the following line, remove the ‘#’ to uncomment the line and set the cluster.name property to “graylog” as shown below.

cluster.name: graylog

You also need to add the below to the config file.

 action.auto_create_index: false 

Now start Elasticsearch, and enable it at startup.

sudo systemctl daemon-reload 
sudo systemctl enable elasticsearch.service 
sudo systemctl restart elasticsearch.service
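
Before moving on it is worth checking that Elasticsearch is answering on its default port (install curl first if it isn't present); you should get back a small block of JSON that includes the cluster name you set above:

curl -XGET 'http://localhost:9200'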

Now we are ready to install Graylog. cd into your download or tmp directory and download the latest repo config.

 wget https://packages.graylog2.org/repo/packages/graylog-3.0-repository_latest.deb 

First we install the repository package with dpkg, then install Graylog using apt.

  sudo dpkg -i graylog-3.0-repository_latest.deb
  sudo apt-get update && sudo apt-get install graylog-server  

Now don’t get carried away, because there is still a bit of work to do before graylog will start.

All the instructions are contained in the following file: “/etc/graylog/server/server.conf”

We can open it directly using the following:

sudo nano /etc/graylog/server/server.conf

Take the time to read through the comments; it will help you understand a little of what you are doing. With that in mind, let’s continue. Exit nano using Ctrl+X.

First we create our “password_secret” from the command line, using the command below to generate a random secret.

 pwgen -N 1 -s 96

Then open the config again and paste the resulting value into it after “password_secret = ”.

 sudo nano /etc/graylog/server/server.conf 

Save your change and exit. Next we create our “root_password_sha2” in a similar way from the command line. (Remember the password you choose here, as you will need it to log in to Graylog later on.)

You could run “echo -n yourpasswordhere | shasum -a 256” as suggested in the config file; however, the online guidance is to use the command below.

 echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1 

Copy and paste this new hash value into the server.conf file after “root_password_sha2”

OK, so for now we will be connecting to Graylog over HTTP. To use HTTPS we would need to configure a proxy server, which won’t be covered here, so if this is in production and you are not using HTTPS, always connect over a VPN. Don’t make the web interface externally available. To configure HTTPS have a look at the docs here

You should also enable the host firewall to allow only ports 22, 9000, and 8514, but don’t enable it yet. Get everything set up and confirmed as working, then enable your firewall, as we will show later.

To configure the web interface we need to set one further option in the same server.conf file: “http_bind_address”. (In Graylog 3 this single setting replaces the old “rest_listen_uri” and “web_listen_uri” options.)

Get the IP of your server with the ifconfig command, then paste it into the location shown below and make sure the line doesn’t start with a ‘#’, which would mean it is commented out. If the ‘#’ is there, remove it. This one setting covers both the web interface and the REST API.

http_bind_address = yourIPaddress:9000

Save and close the file. If you want more information on configuring the web interface see the documentation here

All that’s left to do is start and configure graylog to enable at startup

sudo systemctl daemon-reload
sudo systemctl enable graylog-server.service
sudo systemctl start graylog-server.service

That’s it, give your server a restart with the following

sudo shutdown now -r

Browse to “yourIPaddress:9000/” and you should be greeted with the following login box. If not, try manually restarting all the services (mongod, graylog-server and elasticsearch) using the steps from this guide and see if that resolves it. If that still doesn’t work, you’ve done something else wrong!

Now we know we can connect, let’s enable the firewall.

sudo ufw enable

And open the 2 ports we need for connecting to it

sudo ufw allow 22
sudo ufw allow 9000

You can check status as below

sudo ufw status

You can also check the status of graylog as shown below

sudo systemctl status graylog-server.service

If you have any issues you can use the following command to view the logs and look for clues.

 sudo tail -f /var/log/graylog-server/server.log 

Come back for the next part as we setup a complete SIEM and logging system.

Add Linux endpoint to existing OSSEC monitoring Server.

If you have an existing OSSEC server this tutorial will show you how to add a linux endpoint which we want to monitor as an agent.

Now on this new server (also Ubuntu) we run very similar commands to those used for the OSSEC monitoring server: we need to update our repo and install the required dependencies.

sudo apt-get update
sudo apt-get install build-essential
 sudo apt install libpcre2-dev zlib1g-dev 

Download the latest build to the tmp folder

wget https://github.com/ossec/ossec-hids/archive/3.3.0.tar.gz -P /tmp

Extract

cd /tmp
sudo tar -zxvf 3.3.0.tar.gz

Go to the directory, then install

cd /tmp/ossec-hids-3.3.0
sudo PCRE2_SYSTEM=yes ./install.sh

Now you will need to answer the questions:

  • Installation type – agent
  • Where to install – use the default (just hit enter)
  • Server IP address (this is the IP address of your monitoring server)
  • Run Integrity Check – y
  • Run rootkit detection – y
  • Enable firewall drop – y (you can add your IP address to the whitelist, just in case)
  • Hit enter – If you get any errors then most likely build-essential has not installed correctly.

Now back to the OSSEC Server so we can add the new agent allowing the two to communicate.

sudo /var/ossec/bin/manage_agents

Select ‘a’ from the options and complete the details for the agent.

Now the agent is added we need to extract the unique key and import it to the agent server.

Select option ‘e’ then make a note of the key or paste it into a file.

When finished select ‘q’ to quit.

Now we return to the agent server and run

sudo /var/ossec/bin/manage_agents

This time select ‘i’ to import, then copy or paste your key as instructed.

If the key is correct you should get a success message.

Now we need to restart our agent server, then log back in and check that OSSEC is running.

sudo /var/ossec/bin/ossec-control status

If it is not running then use

sudo /var/ossec/bin/ossec-control start

Back on the monitoring server we need to restart the services like so.

sudo /var/ossec/bin/ossec-control restart
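
To confirm the new agent is actually checking in, you can also list the connected agents from the monitoring server – the -lc switch of agent_control should show only agents that are currently connected (check agent_control -h on your build if the option differs):

sudo /var/ossec/bin/agent_control -lc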

That’s it. If you set up email alerts you will already have some notifying you of logins and agents being added.

If the agent is not reporting, you may need to open the host-based firewall to allow port 1514/udp, which is the port OSSEC uses.

sudo ufw allow 1514/udp

In a future blog we will look at adding our own alerts.

Install OSSEC 3.3.0 on Ubuntu 16.04 To Monitor Your IT Infrastructure

I’ve been using OSSEC for a few years now and really like it. Recently while migrating my infrastructure I managed to ruin the install and so decided it would be quicker to reinstall the new version from scratch rather than repair then upgrade the existing install. This was all performed on a fresh install of ubuntu 16.04

Just note that if you wish to monitor other assets on your network you will need to set a static IP for this server. I do this using my router to set static arp entries however you can just set the server to use a static IP in the network config, but that is not covered here. Have a “duckduckgo”, there are thousands of articles telling you how to do that!

Jump onto our box, and update our repository as always.

sudo apt-get update

Then we need to get the prerequisites before installing OSSEC.

sudo apt-get install build-essential

Now download the latest version to our preferred destination.

wget https://github.com/ossec/ossec-hids/archive/3.3.0.tar.gz -P /tmp

You will want to verify the checksum hash if this is going into a production environment. (We’ll do a tutorial on verifying hashes in the future.)
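
As a quick sketch, that verification boils down to hashing the tarball and comparing the output against the checksum published alongside the release on GitHub:

sha256sum /tmp/3.3.0.tar.gz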

Now we ‘cd’ to the location and extract the tar file we just downloaded.

cd /tmp
sudo tar -zxvf 3.3.0.tar.gz

This gave me a folder named ossec-hids-3.3.0, so we cd into it

cd ossec-hids-3.3.0

Then run the install script.

sudo ./install.sh

Now you will need to answer the questions:

  • Installation type – server
  • Where to install – use the default (just hit enter)
  • Email notification – y (then enter your email address and smtp details if you want to receive emails)
  • Run Integrity Check – y
  • Run rootkit detection – y
  • Enable firewall drop – y (you can add your IP address to the whitelist, just in case)
  • Hit enter – If you get any errors then most likely build-essential has not installed correctly.

Initially I received an error and a message that the install could not continue.

The error stated “can’t cd to external/pcre2-10.32/” as shown below.

cd external/pcre2-10.32/ && \
./configure \
	--prefix=/Downloads/ossec-hids/src/external/pcre2-10.32//install \
	--enable-jit \
	--disable-shared \
	--enable-static && \
make install-libLTLIBRARIES install-nodist_includeHEADERS
/bin/sh: 1: cd: can't cd to external/pcre2-10.32/
Makefile:766: recipe for target 'external/pcre2-10.32//install/lib/libpcre2-8.a' failed
make: *** [external/pcre2-10.32//install/lib/libpcre2-8.a] Error 2

 Error 0x5.
 Building error. Unable to finish the installation.

I’m no linux genius but I guessed that I was missing a dependency, namely libpcre2-8.a. I had a look for anyone else who had similar issues and found this thread;

https://github.com/ossec/ossec-hids/issues/1663

This also seemed to confirm what I suspected, but did show I would need to run a command I’d not seen before. The solution, if you’re installing as shown here, is below.

sudo apt install libpcre2-dev zlib1g-dev

Once this had completed (with no further errors) I fired up the install script again, this time including the new directive (PCRE2_SYSTEM=yes).

sudo PCRE2_SYSTEM=yes ./install.sh

Now run through the install questions again;

  • Installation type – server
  • Where to install – use the default (just hit enter)
  • Email notification – y (then enter your email address and smtp details if you want to receive emails)
  • Run Integrity Check – y
  • Run rootkit detection – y
  • Enable firewall drop – y (you can add your IP address to the whitelist, just in case)
  • Hit enter – If you get any errors then most likely build-essential has not installed correctly, or you need to run through the previous section again.

Now we’re ready to fire up OSSEC

sudo /var/ossec/bin/ossec-control start

or check the status like this

sudo /var/ossec/bin/ossec-control status

The other thing we should do at this point is enable the firewall and only allow the required ports. 1514 allows OSSEC to communicate with the agents it is monitoring; 22 allows SSH so you can connect to the server using PuTTY. If you don’t connect over SSH then don’t allow port 22.

sudo ufw enable
sudo ufw allow 1514/udp
sudo ufw allow 22/tcp 

I also use Graylog so I need to open an additional port to allow the logs to be ingested

sudo ufw allow XXXX/udp

If you want to use Graylog with OSSEC the tutorial is here; https://2code-monte.co.uk/2018/04/02/ossec-logs-into-graylog/

To add agents for Windows machines is here: https://2code-monte.co.uk/2018/04/02/add-windows-server-to-ossec/

The blog for adding Linux agents is now up here: https://2code-monte.co.uk/2019/06/10/add-linux-endpoint-to-existing-ossec-monitoring-server/

Until next time!

How to remove old Kernel and header versions.

While trying to update a server today I was receiving a dpkg error warning that the disk was full.

A simple cmd will show disk usage

df -l

This showed that my /boot partition was full!

We need to browse to the partition

cd /boot

Then list the contents

ls

This showed it was full of old kernels and header versions, so how do we remove these without breaking everything? This is how I did it.

First we need to find out which version we are using.

uname -r

Whatever version it returns, DO NOT REMOVE IT!

Let’s see what other versions dpkg thinks we have.

 dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r) 
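
You can run the equivalent query for old header packages too – same pattern, just matching linux-headers instead of linux-image (as before, double-check your running version is not in the output before purging anything):

dpkg -l | tail -n +6 | grep -E 'linux-headers-[0-9]+' | grep -Fv $(uname -r)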

Now that we can see what we have, we can remove them, but note you should only remove those labelled ‘ii’ in the output. If they are labelled ‘iU’ DO NOT remove them.

ALWAYS make a backup/snapshot before making major changes!!!!!

To remove a kernel image we use;

 sudo dpkg --purge linux-image-4.4.0-142-generic 

In my example I used version “4.4.0-142”, but yours will depend on which versions show in the output from the previous step. However, I received a dpkg error about a dependency.

No worries – we know we can remove this version as it is not in use, so we should be able to remove the dependency first.

We run the same cmd but this time against the dependency.

 sudo dpkg --purge linux-image-extra-4.4.0-142-generic 

Excellent, let’s try and remove the kernel again.

Excellent, so we just repeat the steps for all the versions and their dependencies

I tried to run ‘apt-get update’ but it showed the headers were still installed.

So I used the same cmd to remove the headers

sudo dpkg --purge linux-headers-4.4.0-142-generic 

Then I tried running “apt-get update” again, but when trying to install updates I received the same output that headers were still installed. I tried to purge again but got a dpkg error that the header was not installed. Hmmmm.

I’m not a fan of “autoremove” as I have broken many an installation using this cmd incorrectly. However as I have manually removed these I’m pretty confident it’ll be OK. I created another snapshot just to be sure, then went for it.

sudo apt autoremove 

No errors, so I updated Grub next

sudo update-grub

Again no errors, and I was able to update without issues. Phew. Now to carry on with what I was trying to do in the first place.

Upgrading To Graylog Enterprise. (Free SIEM Part 4)

We have covered Graylog a fair bit, but to make the most of all its functionality we need to upgrade to an Enterprise license. Now before you start screaming “I want a FREE solution”, Graylog Enterprise is free for up to 5GB of data a day, and if you are using more than that then you should be paying for it. You can use Graylog for logging without a license, however you won’t be able to make use of the Enterprise plugins. It depends how much functionality you want from Graylog.

First we need to request a license by going to https://www.graylog.org/downloads and completing the form shown below.

NOTE: To get your Cluster ID log in to your Graylog instance, go to the “System” Tab and select “Overview”. 

Now, while we are waiting for our license key to arrive, we need to install the Enterprise plugin package on our Graylog server. As we are running the latest version it’s as simple as the command below.

sudo apt-get install graylog-enterprise-plugins -y

Now we need to restart the graylog server

sudo shutdown now -r

While the server is rebooting you should hopefully have received an email containing instructions for getting your license key. Once you have your key, head back to Graylog and import it. Go to the “System” tab and select “License”, then select the “Import New License” button and paste in your license.


You will then receive a message that your instance is activated, and your license will show under installed licenses on the same page.

That’s it, we are now licensed and ready to make full use of Graylog. Check back for the next steps of our free SIEM, and eventually adding threat feeds, and custom alerts to our Graylog server.

How To Install Nessus Vulnerability Scanner

In our previous post we showed how to download and verify the hash of the Home edition of Nessus, and here we will show how to install, setup and run your first scan. This is a very basic setup to get you up and running quickly with the free version of Nessus. If you are going to be using in a live production environment, don’t use this guide.

If you have not read the initial post go here then come back.

You should be where we left off, which is just after checking our hash and confirming it against the checksum, as below.

Get used to running the “ls” cmd to check the directory you are in and that you have access to the correct files.

We “ls” to check as mentioned above, then use “dpkg -i” which will unpack and install Nessus.
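
Using the package name from the previous post, the install command looks like this (adjust the filename to match the version you downloaded):

sudo dpkg -i Nessus-7.2.0-debian6_amd64.deb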

While that runs and installs we need to go back to tenable.com and get an activation code.

Fill in the details and wait for your email with the activation code.

Go back to your Terminal where the install should now have completed. Check the Window for errors and if there are none we are good to continue.

Run the following (as root, or prefix with sudo): “/etc/init.d/nessusd start”

Nessus is now running, so open a browser on the same machine and go to https://localhost:8834 and you should get the login screen.

Create a username and a password to log in for the first time (don’t forget these!) and you will get the activation page.

Leave the scanner type, and enter your activation key which you will have received by email.

If correct you will see the next screen, now is the time to make a coffee as this stage may take some time as Nessus sets up.

Once completed you will see the “New Scan” screen.

Select New Scan as shown, then select “Basic Network Scan”. This will allow us to do a basic scan of our internal home network.

We will need to find out the IP range of our network for the next step; the easiest way to do this is to use either “ifconfig” on a Linux box or “ipconfig” on a Windows box. Run this from a terminal window and make a note of your IP address. (This is a simple step, so if you are unsure how to do it you really shouldn’t be installing Nessus, to be honest!)

Name and description can be whatever you want, but the IP address in the targets box needs to be the IP address you want to scan. In the example it shows “192.168.0.0-255”, which means we are going to scan every address on our network. To scan an individual host you would use a single address, for example “192.168.0.5”. Now save as shown below and you’ll go back to the main page.

To start your scan click the chevron, as outlined below, then wait for your scan to complete.

Once the scan completes, the real fun starts!

How to verify a file hash in Linux

We have recently shown how to do this in Windows so we will now show how to do this in Linux. Here we will be using Kali but it will work with most Linux distros.

We want to download the free Home Version of Nessus but want to make sure the file has not been tampered with before we install.

We browse to the download site and download the version we need but also copy the hash checksums to simple text files for comparison later. You can do this by simply copying to your clipboard and then paste into a blank text file.

We will download everything to our download folder to make things simple. Once everything is done you should have 3 files in your download folder as shown below.

Now off to the cmd line so open a Terminal and “cd” to the Downloads folder as shown, then use “ls” to list the directory to also confirm you are in the correct location and the correct files are there.

Now we run “sha256sum Nessus-7.2.0-debian6_amd64.deb”. The cmd part is “sha256sum” and the next part is just the file name you want to hash.

You should see the output of the cmd which is your file hash to compare to the one from the site that you had copied earlier.

Now copy that hash output and paste underneath the one you have from the site. We used sha256sum and so will need to compare against the sha256 checksum. 

As you can see, the highlighted one is our output, and they are a perfect match. Excellent, we can now install Nessus with confidence that it has not been tampered with or had malicious code added.
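
If you would rather not eyeball two long strings, you can let the shell do the comparison. A minimal sketch, assuming you saved the checksum copied from the website into a file called nessus-sha256.txt (a name chosen just for this example) in the same folder – if grep prints a line, the hashes match:

grep "$(sha256sum Nessus-7.2.0-debian6_amd64.deb | cut -d' ' -f1)" nessus-sha256.txt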

Our next post will see us install Nessus.

 

How to add SCSI drive to Ubuntu Server

While messing about in my home lab I found I needed to add a SCSI drive to a virtual machine and had no idea how to do it. Although this doesn’t have anything to do with security I’ve posted it here as I know I will definitely need to do this again at some point. Here we have already attached the new VHD to a SCSI connection in the Virtual machine management console, so the disk is physically connected but needs to be formatted, labelled, and initialised.

To list all connected disks run

ls /dev/sd*

this will return the following

If you are unsure which is the new disk, then the easiest way to check is to disconnect the new SCSI drive and run this command, then reconnect the disk, run the same command again, and compare the results. The disk that wasn’t there the first time is obviously your new disk. The disks should be labelled “sd” with a corresponding letter. The primary disk is usually “sda”, with the partitions numbered – so sda1 would be the first partition on the first disk. In our example sdb is our new disk so we will be applying changes to this; if your disk is named differently then you will need to replace “sdb” with the name of your own disk in the following instructions.

Once we have identified the new disk we are ready to launch “fdisk” utility.

sudo fdisk /dev/sdb

Always look at the help menus when using new tools, as it will help you gain an understanding of the tool rather than just blindly following instructions. The help menu is shown below.

From here we type “n” for a new partition, then “p” for primary and accept all the defaults.

You must then select “w” to write and save the changes, if you don’t do this then the partition will not be created.

Now that we’ve created our partition we need to create the file system

sudo mkfs.ext3 -L DATA /dev/sdb1

The “-L” switch sets the partition label; here we have used “DATA” but you can choose anything. We are creating the filesystem on our new partition, which is “/dev/sdb1”.

Now we create a mount point and mount the filesystem

sudo mkdir /DATA
sudo mount /dev/sdb1 /DATA

To check the location is mounted we can run

df -l

Next we make a directory

sudo mkdir /Documents

The final thing we need to do is change a configuration so that this new location is mounted every time the server is restarted.

We do this by editing the following file.

sudo nano /etc/fstab

Then add the following line as shown below. (If you used a different device name or mount point, adjust the line to match.)

/dev/sdb1          /DATA        auto       defaults    0  0
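
Before rebooting you can sanity-check the new entry; mount -a attempts to mount everything listed in fstab, so if it returns without errors the line is good:

sudo mount -a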

That’s it. Check the config by restarting your machine, then run the list-disks and mounted-filesystems commands again; you should see that your new partition is already mounted, as shown in the screenshots below.