Update on Unitrends

A higher-up at Unitrends read my article on here about Veeam vs Unitrends and emailed me, asking how they could improve their product. I have to say it was quite unexpected – sometimes I’m surprised at the readers I get on here. I’m just some nerdy IT guy, and I’m not always aware of the readership I have; I mostly think of this blog as documenting my experience with products.

Either way, I told them a few of my gripes, and he thanked me and let me know that some fixes were in the works. A couple of the things they highlighted were better error reporting in the “Failure” emails and better uniqueness criteria when scanning for VMs – the latter should help a LOT with cloning and replication, because we won’t see the duplicate UUID errors we’ve hit in the past.

Overall, I really appreciate a company that is willing to listen to people – especially someone like me who works for a smaller company with a smaller environment. I work in the field and I understand that most issues are only issues once someone reports them. In the past, though, reported issues seemed to just go into a queue somewhere and that was that – you never heard about them again or saw anything happen. It’s nice to know that Unitrends listens to its customers and cares how its product performs. Even though I’ve had my troubles with it, they seem to genuinely care whether your system works for you and want to make improvements so that it works in everyone’s environment – even ours, small as it is.

I’m not sure whether I complained about this last time, but I had a small gripe about the CPU on our particular box being under constant load due to deduplication – at the time I thought this was a problem. I did a little power monitoring and found that the box only draws about 320 watts on average (there was more CPU utilization during backup windows – those are the little spikes you see in the graph – and power usage rose slightly then, dropping back during quieter periods), which in our environment is totally acceptable in exchange for more space. If this appliance were for something else, it might be an issue, but since its sole purpose is backing up and archiving data, the higher CPU utilization doesn’t bother me.
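
To put that draw in perspective: 320 watts around the clock is roughly 320 × 24 × 365 ≈ 2,800 kWh per year, which at a hypothetical $0.10/kWh works out to about $280 a year in electricity – not much for a dedicated backup appliance.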

In the meantime, we have decided to extend our service agreement and move to a newer physical appliance as well, so we will see how that goes. I’m sure you’ll be hearing more about this in the future.

8Gb FC, 16Gb FC, 10GbE iSCSI, or 1GbE iSCSI: which is right for your storage area network?

This is something I’ve had a lot of firsthand experience with, and something I’ve taken quite a bit of time to look into as well. The answer to this question is going to be the basis of this write-up, and if you don’t want to read much further, I’ll just say this: it really depends on your environment. There are pros and cons to each of these, and we’ll hit each of them.

First off, if you are considering upgrading your main storage array, chances are you’ll also be looking at an entirely new storage networking infrastructure, because things are evolving pretty fast in the storage area network world. Seemingly affordable 10Gb Ethernet is making a dent, 16Gb Fibre Channel has hit the market (though not exactly affordable for a business our size), and storage arrays featuring each of these are a possibility.

My first major suggestion: do some heavy monitoring of your current environment. See where your peaks are and where your low times are. See how much storage bandwidth you are currently using. Watch your disk queues and see if there are reads or writes just sitting in the pipe waiting to get served up.
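
If your hosts are ESXi, esxtop in batch mode is a low-effort way to capture that kind of baseline. Here’s a minimal sketch, assuming SSH access to a host – the sample interval and output path are just examples (5-second samples for 8 hours; the CSV gets big):

# esxtop -b -d 5 -n 5760 > /tmp/esxtop-baseline.csv

Pull the CSV into perfmon or a spreadsheet afterward and watch the device latency (DAVG) and queued commands (QUED) columns – those are the ones that tell you whether I/O is sitting in the pipe.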

Let’s talk about bottlenecks for a second. Bottlenecks happen at three main places – the server, the switch, or the array – and most happen at either the network or the storage array. Sometimes they are caused by network misconfiguration, and sometimes by the disks in the storage array not being able to keep up with how fast you are requesting reads and writes. These bottlenecks can also be misinterpreted by monitoring.


I’ll give you a quick inside tip: getting relatively high IOPS does NOT depend on the speed of your storage area network. Of course, you do need bandwidth for sustained transfer speeds (if you are doing large reads and writes), but if your traffic is bursty and requires relatively high IO in short bursts (SQL Server comes to mind), you don’t need a lot of bandwidth. What you need is fast response time and fast IO. Now, that being said, how do you get high IO over… let’s say 1GbE? It’s all in how the array handles your IO.
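
To put rough numbers on that: 10,000 IOPS of 8 KB random I/O is only about 80 MB/s of throughput, while a single 1GbE link tops out around 125 MB/s theoretical. In other words, a fairly busy OLTP-style workload fits comfortably in 1GbE – as long as the array behind it can turn those small requests around quickly.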

Let’s say, for example, that you are disk bound – meaning the disks in your array just can’t keep up with the reads and writes, which is a fairly common issue with spinning-disk arrays unless you have enough spindles (or a fat read / write cache) to keep up. Your servers are pushing writes and requesting reads faster than the disks can service them. In that case, your storage bandwidth is not the issue – it’s the array itself that is having trouble filling that bandwidth. Monitoring may still interpret this as bandwidth lag, because you’ll see your storage links being hit fairly hard while the network waits on reads and writes to complete.
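
As a rough sanity check on the disk side: a 15k spindle is usually good for somewhere around 175–200 random IOPS, a 10k drive around 125–150, and a 7.2k SATA drive around 75–100. So eight 15k disks in RAID 10 give you very roughly 8 × 180 ≈ 1,400 raw read IOPS (less for writes once the RAID penalty is factored in). Ballpark figures, but enough to tell you whether your workload will be disk bound before the network question even matters.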

Monitoring is essential, but it’s also VERY important to know how to interpret it – you need to monitor multiple places (your network, your servers, and your storage) and be able to read all that data together. Most Ethernet and Fibre Channel switches support SNMP, which can be used to monitor specific ports. ESXi has many types of monitoring you can use – from tools like VMTurbo to Operations Manager. Using SNMP and a graphing program like LogicMonitor or Observium, you can really drill down to the port level and see which servers are using a lot of bandwidth and / or storage.
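
On the switch side, a plain snmpwalk against the 64-bit interface counters is an easy way to prove out what your grapher will be polling. A minimal sketch – the community string and switch IP here are stand-ins for your own:

$ snmpwalk -v 2c -c public 192.168.1.10 IF-MIB::ifHCInOctets
$ snmpwalk -v 2c -c public 192.168.1.10 IF-MIB::ifHCOutOctets

Those are per-port byte counters; LogicMonitor or Observium polls the same OIDs and graphs the deltas, which is where the port-level bandwidth views come from.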

When you get a nice array on the back end, you’ll be shocked at how little bandwidth it actually takes to get pretty high IO. Monitoring, when selecting a storage area network, is your best friend: you need to know your current environment rather than just listening to the salesperson. The salesperson is trying to sell you something; you are trying to make the best purchase for your environment. Know as much about your environment as possible so that you don’t spend a ton of money on capacity you will underutilize.

There does need to be a happy medium between growth potential and your current bandwidth, though. If you are planning on growing to more servers and more IO, plan accordingly.

VMworld and VMware Updates

I have to say, VMworld was pretty fun this year, and it’s bringing forward a LOT of really good (and needed) updates to the vSphere suite of products. Veeam announced Veeam 8, which has some sick new features too. I’ll touch on all of these later; for now, let’s talk VMware.

Fault Tolerance for VMs with up to 4 CPU Cores

Honestly, I was expecting better, but this is a HUGE improvement over a single-core limit. That was one HUGE limitation of Fault Tolerance in our particular environment. I know VMware suggests using one vCPU on some VMs (like small domain controllers), but I find it takes FOREVER to update those VMs when they only have a single core.

vSphere Web Client that can manage ESXi Hosts

To be honest, I’m not a fan of the web client. But…

Updates to the Web Client for Stability and Speed

Alright, this is awesome. They demoed this at VMworld and I have to say it looked MUCH more responsive – it was in bad need of an overhaul.

VMFS 6

Apparently there will be a lot of nice tweaks around VMFS and LUN provisioning.

VVOLS

This is a pretty sweet technology that will most likely replace the idea of RDMs; it has been in development for about two or three years. More info on it here: http://blogs.vmware.com/vsphere/2012/10/virtual-volumes-vvols-tech-preview-with-video.html

All in all, nothing really huge was announced beyond some cloud stuff – other than Veeam 8, which is going to be excellent.

VCAP-DCA Lab Setup – #2 – Host Configuration

One of the first things to do is prepare your lab environment. At first this was pretty complicated for me (probably because I was overcomplicating it), but it really isn’t too hard.

There are a few things that you need to do, however, if you want performance and reliable networking from within your nested lab.

The first thing I’ll go over is VMware Workstation configuration; in the next post I’ll cover getting your lab domain (Active Directory) set up.

I’m going to assume that you have everything I listed in the first post (a quad-core processor or better, preferably an i3, i5, or i7; 32GB of RAM; about 200–400GB of SSD / RAID / HDD storage; and VMware Workstation installed). The screenshots are from my little lab setup, which, as I said, is a DL360 G5 running Windows Server 2012.

Incidentally, you don’t need much on your lab host. I installed VMware Workstation, the .NET Framework 2.0 feature (not really sure this is a requirement – I did it more out of habit), and a web browser (preferably Chrome, though FF will work too; the only reason I say Chrome is preferable is its embedded Flash support, which is especially important if your host is going to be your Control Station).

So, on to Workstation. First, you’re going to want to change where your VMs are stored so they land on your faster / separate storage. If you are lucky enough to have your OS installed on a 1TB SSD, I guess you can ignore this part, but for those who want their VMs on separate storage, this is for us.

  1. Open VMware Workstation
  2. On the top menu, click “Edit” and then “Preferences”
  3. Under “Workspace”, choose a folder on the drive where you want your VMs stored. As you can see in the screenshot, mine are going to be stored on the V:\ drive in a folder called VMs. Creative, I know.
  4. Click “Ok”. That part is done.

Next is the networking – I had some trouble with this at first because I was determined to make it harder than it really is. For the most part, the NAT adapter is all we will need (we will NOT be doing physical VLANs or anything like that in this particular lab, since this is basically a “lab in a box” and I am using a dumb physical switch).

To set it up the way I did (which has been working quite well for me), do this:

  1. Open VMware Workstation
  2. Click “Edit” and then “Virtual Network Editor”
  3. I deleted all the VMnet adapters just so that I had a clean slate
  4. Add a new network by clicking “Add Network” and choose “VMnet0”. It doesn’t have to be VMnet0 – it can be any number you like – but for the rest of this guide, I’m going to assume you chose VMnet0.
  5. Once it is created, choose NAT as the network type.
  6. Next, make sure that “Connect a host virtual adapter for this network” is checked.
  7. DHCP is totally optional. I left it enabled just so the VMs would have an IP when they first powered on. I’ve read some people say to turn it off, but I don’t really care that much whether it’s on – as long as you know you’re going to end up assigning static IPs to your VMs anyway.
  8. Next, choose your subnet. For this particular setup, I’m going to be using 192.168.137.0/24 for LAN and 192.168.147.0/24 for iSCSI / storage. You can only have one NAT VMnet per host, but you really only need one.
  9. You’ll see that I also have a bridged adapter as VMnet5 – this is for my Control Station. I wanted it on dual subnets so I could RDP in from a different computer on my regular network. If you plan on doing it that way, just bridge one of the adapters in your computer / server that has physical access to the network.
  10. Click ‘OK’. That will about do it for now.


That’s just about all it took to get my configuration rolling. Next I’ll get some VMs up and running – the first thing you’ll want on your lab network is a domain controller. You’ll hear me refer to it a lot as a DC, so if you see me mention a DC, just know I’m talking about a domain controller.

VCAP-DCA Lab Setup – #1 – Prerequisites

I’ve been looking around for a good lab setup using either ESXi or Workstation, and despite a lot of people telling you “how to do it”, there really isn’t a solid how-to on setting up a lab for getting hands-on experience with the things you’ll need to know for the VCAP-DCA. So I’m going to start a “this is how I did my lab” post series and walk you through exactly how I set up my lab and whether or not it worked.

So for starters, there are quite a few prerequisites, and here they are:

Hardware:

  1. A CPU that has some horsepower – I would say that for this setup you want at least a quad core, and the newer the better (the i3, i5, and i7 series have Intel VT; see the quick check after this list). If you have an older processor, that’s fine too – it just means you won’t be able to run 64-bit VMs on nested ESXi (which is what we are going to run: nested ESXi). This isn’t a huge deal, because running VMs is the easy part of this – and 32-bit VMs will be fine for the lab.
  2. RAM – you’ll want at least 16 gigs; 32 gigs is preferable, and anything above that is even better
  3. Storage – you’ll want at least 200GB of pretty fast storage. You could probably get by with a single 7200 RPM disk, but I promise it will not run very well – your labs will remind you of the labs at VMworld. You’ll probably want an SSD or at least some type of RAID array, and the faster the disks the better (10k, 15k). I think 200GB is the minimum; you’ll probably want closer to 400GB.
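
If you want to verify the Intel VT part before committing to a box, and the machine happens to be running Linux at the moment, a quick grep against /proc/cpuinfo shows whether the hardware virtualization flags are exposed (vmx for Intel, svm for AMD):

$ grep -cE 'vmx|svm' /proc/cpuinfo

Anything greater than zero means the CPU advertises the extensions – though you may still need to flip them on in the BIOS. (On a Windows 8 / 2012 host, systeminfo reports the same thing under its Hyper-V requirements section.)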

You’ll need some software, and we can probably get away with “trial” versions of everything, provided you work at it every day (most trials, especially VMware’s, only last 60 days). You’ll still need some ISOs to do all this, and you’ll want those before you start the lab. I’ll lay them out really quick, along with why you need them.

Windows:

  1. Windows Server (at least 2008 R2, x86 or x64). This will be used as a domain controller. You’ll want Active Directory integrated when we start working with things, and it only takes a few minutes to set up, so you are best off just doing it.
  2. Windows Server (at least 2008 R2, x64). This will be for vCenter Server, which we will definitely need.
  3. Windows 7 or 8 (x86 or x64) as your “Control Center”. This will basically be your management box. You can use the physical host if you want, though I am going to whip up a VM for it.

VMware:

  1. vCenter Server (Standard). We’ll need this. Obviously.
  2. vMA (VMware Management Assistant). This is an OVA you can download and deploy.
  3. VMware Workstation – I’ll be doing all of this on Workstation 10, but if you want to do a nested lab on ESXi, you absolutely can; it may just be set up slightly differently (especially the networking).
  4. VMware vSphere PowerCLI – this gets installed on your Control Center box.
  5. VMware ESXi 5.5.0 – latest revision. We’ll use update manager to get it all up to date, so the latest one you can download from VMware will work.

Others:

  1. Your choice of OpenFiler or FreeNAS.

For this particular setup, I’m going to use an older HP DL360 G5 (that I just happen to have lying around) with 2x Intel Xeon 5430s (quad core each, so 8 cores total), 48GB of RAM, 2x 10k 146GB disks in RAID 1 for the OS, and 4x 146GB disks in RAID 5. Yes, RAID 5 is not the best choice for writes, but I am using a little program called Primo Cache as a RAM write cache. If you have spare memory and want a RAM write cache, I would highly suggest that program – it’s optional, so I’m not calling it a requirement, but it definitely helps.

For software, I’ll be using VMware Workstation 10, OpenFiler, and Server 2012 for all Windows Server based stuff. I’ll get to configuring these in the next post, but this is what you will need at this point to get started.


Veeam vs Unitrends

In our business, we take backup, recovery, and replication pretty seriously. We have a variety of backup utilities, storage devices, and appliances dedicated as replica datastores. We currently use a combination of Veeam and Unitrends, and I am very partial to one of the two. I’ll go through each of them and their pros and cons, because they both do some things very well and both have a few things that could be improved. Veeam happens to be the one I’m partial to, so this article will pit them against each other – basically my ramblings about why I prefer Veeam over Unitrends.

Unitrends comes in a few different packages – you can buy an appliance from them (which is what we have), and they also now offer Unitrends Enterprise Backup, which is basically a VM that is almost the same thing as the appliance, except that you use shared storage (or datastore storage) for your backup repository. A few things Unitrends really excels at are compression and deduplication (it has native dedupe). It’s Linux (CentOS) based and uses a lot of custom-built packages to back up your data, with Postgres as the database (holding all backup metadata). You can map iSCSI, NFS, or CIFS volumes as a backup repository (at least with the VM – the appliance does not allow this, or at least our license does not). One of the things Unitrends shines at is that it can back up basically anything – SQL Server, virtual machines (Hyper-V or VMware), physical machines (bare metal or specific disks). You can also create bare-metal restore media (CD / DVD) and restore a physical server that way.

Veeam, on the other hand, is installed on a Windows machine and requires a SQL Server database (it will install SQL Express if you don’t have a full SQL Server infrastructure). Veeam does an extremely good job compressing VMs (a very high compression ratio, at least compared to Unitrends). What’s also nice is that it’s VERY fast – we back up VMs over a WAN connection (a 20 meg line) and for the most part they take less than two minutes apiece. Some take a little longer, but most are very fast even over higher-latency, slower networks. In contrast to Unitrends, Veeam really only backs up virtual machines (Hyper-V or VMware, depending on your license). One of the main things I really like about Veeam is the GUI and program interface – it’s straightforward, clean, and easy to use. Unitrends, in contrast, has a really hokey Flash-based web UI. Things are buried in all kinds of odd places, and it’s just not as clean and straightforward as the Veeam interface.

So, that said, here is the comparison:

Veeam Pros:

1. Solid GUI
2. Fast backups (using change block tracking)
3. Awesome compression
4. Solid Feature Set
5. Robust backup options (incremental, reversed incremental, full)
6. Custom Backup Repositories (as in physical disk, SAN to SAN, etc…)
7. Replication is included with the license
8. You can install it just about anywhere – a VM or a physical machine – though I would recommend a physical machine, because depending on your compression ratios, CPU usage can get a little heavy

Veeam Cons:

1. Backing up Physical Machines is not an option at this point
2. Change Block Tracking doesn’t play nice with other backup appliances (to be fair, this isn’t Veeam’s fault – it’s more of a VMware problem)
3. Price – it’s pretty expensive – but depending on what you need backed up, it’s worth it

Unitrends Pros:

1. Backs up just about anything (HyperV VMs, VMware VMs, Physical Windows, Linux, Solaris, etc… servers)
2. Available as either a VM or a physical appliance (though the physical appliance has to come from Unitrends – you can’t use your own hardware)
3. Restores are pretty quick

Unitrends Cons:

1. Unintuitive and unattractive GUI
2. Does not play nice with any kind of Replicas or VM Clones
3. Bare Metal restores are painfully slow (over network)
4. Reporting is not so hot – when there is a problem, you only get a “Job Failed” email with no description of why. To find out, you have to go digging.
5. Pretty expensive and licensing fees are convoluted

Like I said at the beginning, I was a little hesitant about Veeam at first – mainly because of the expense – but with the constant improvements and updates that come by way of patches, it’s really an awesome application. I would choose it over Unitrends if I had to decide today.

Opsview – Installing Opsview Server on CentOS

If you are looking for a decent server hardware and software monitor, I have to recommend Opsview, for a few big reasons. One is that they have a “free” Community edition (and yes, it is free, but you get zero support from them and you have to host it yourself… oh, and there is an annoying ad on top of the web-based dashboard). If you have a halfway decent network and systems infrastructure, this is no problem. I’ll work through how to set it up in a bit.

In any event – it can monitor a huge amount of information about each server, and it supports Linux (mostly the RPM- and APT-based distros), Windows, and OS X. You can choose what you want to monitor (individual hard drives, partition sizes and space, RAM, Windows services, Linux services, system performance, SQL Server, MySQL, CPU load, etc.). The best part is the mobile app, which lets you check the status of your systems while out and about. That doesn’t exactly make it easier to fix any issues that arise – but at least you’ll know about them.

Notifications are also a handy part of this application – for our little enterprise, we get email notifications when our set services are failing on each specific server. It lets you schedule downtime as well so if you want to stop being spammed while you are rebooting a server, you can just schedule some downtime for that particular unit.

So anyway, let’s get to installing the Opsview Core server – after the server is installed, I’ll show you how to install the clients / agents. If you think setting it up manually is too much trouble, you can just grab the VMware virtual appliance (Ubuntu 10.04 Server, x86). I’ll be running you through setting it up on a stock, newly created CentOS VM (this will work pretty much the same on RHEL, and exactly the same on a physical machine). I don’t think there are any listed “system requirements” on their site, but here are the Dorkfolio.net system requirements:

CentOS 5.8 or 6.3

At least 2GB of RAM (I’ve tried this with 1 gig before and it didn’t end well – the box idles at about 1.1GB, so you should be good with 2).

At least 2 processors 

And at least 20GB of hard drive space.

A few other notes: this can be either a dedicated server or a shared one. The management GUI runs on port 3000, so it doesn’t interfere with web or MySQL traffic. If you want this server to have some other function as well, by all means – just note that it uses about 1.1 gigs of RAM while idling, and when you are using the GUI, it is also a bit of a CPU hog.

Alright – let’s get started, shall we? I am going to assume that you meet the specifications above and have a fully patched CentOS 5.8 or 6.3 box that you have root access to.

First we need to become root:

$ su

Enter the root password and now you are root. Next we need to add the Opsview repository.

# cd /etc/yum.repos.d
# nano opsview.repo

In the opsview.repo file, paste this:

[opsview]
name = Opsview
baseurl = http://downloads.opsview.com/opsview-core/latest/yum/centos/$releasever/$basearch
enabled = 1
protect = 0
gpgcheck = 0

We now have the opsview repo installed. Now we can run

# yum list updates

It should come back empty (your box is fully patched, after all), but it confirms the new repo is reachable and its metadata is cached. Now we can go about actually installing the Opsview server.

# yum install opsview

It will find all the dependencies for you (which, you’ll note, includes MySQL Server – with Opsview Core, MySQL has to be self-hosted. You can’t point it at a MySQL Server on another box; it has to be local. Opsview Pro can use a remote MySQL Server, but remember that comes at a bit of a cost: if that MySQL Server goes down, Opsview goes down too).

It will take a little bit to get up and running – depending on your hard drives / SAN / NAS / wherever you are installing it, how much CPU power the box has, etc.

Once it is all installed, we need to set up MySQL. I prefer the “secure setup” way, so I’d run this:

# /usr/bin/mysql_secure_installation

It will prompt you for a MySQL root password – make sure you remember it, because we’ll need it. After it’s finished, we are just about all set with MySQL.

Next, Nagios (the foundation Opsview is built on) tries to create a suitable environment for the server – it creates a new user called “nagios”. We need to verify that it is set up correctly by running this:

# su - nagios
$ echo "test -f /usr/local/nagios/bin/profile && . /usr/local/nagios/bin/profile" >> ~/.bash_profile
$ exit

If those top two commands work (which they should), we are all good. Next we need to edit a few config files.

# nano /usr/local/nagios/etc/opsview.conf

We need to change the two passwords that say “changeme” to the MySQL root password we just set.

Next we need to run a few Opsview scripts to bind OpsView to the MySQL Server and set up all the tables.

# /usr/local/nagios/bin/db_mysql -u root -p{MySQL root password}
# /usr/local/nagios/bin/db_opsview db_install && /usr/local/nagios/bin/db_runtime db_install

Those second two scripts may take a few moments to run, and you may or may not get some warnings about UTF-8. You can ignore those.

The last thing we need to do is regenerate all the config files based on what those scripts set up. We can do that by running this:

# /usr/local/nagios/bin/rc.opsview gen_config

If this fails for some reason or other, it may be because the log is not writable by Nagios. I fixed this by running:

# chmod 777 /var/log/opsview/opsviewd.log

Once that is set up, we are basically done and we can now start the opsview-web service by running:

# service opsview-web start

We’ll also want to make sure opsview-web and mysqld start on boot:

# chkconfig --level 345 mysqld on
# chkconfig --level 345 opsview-web on

Now you are all set up. If you open a browser on your OpsView server and punch in http://localhost:3000, you should be presented with a shiny new login screen.
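
If your Opsview box is headless and you want to check the service before hunting down a browser, a quick curl will confirm the web front end is answering:

# curl -I http://localhost:3000

You should get HTTP response headers back (a 200 or a redirect toward the login page) rather than “connection refused”.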

To log in for the first time, the username is “Admin” and the password is “initial”. You’ll obviously want to change these straight away. The rest is all done via the handy GUI.

If you want to be able to access this on different machines, you’ll need to either have a FQDN or a static IP, with firewall open for port 3000. You can then access it by pointing your browser to http://ip_of_opsview_server:3000, or http://FQDN:3000.
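
On a stock CentOS box, opening the port usually means punching a hole in iptables. A minimal sketch, assuming the default iptables firewall rather than something custom:

# iptables -I INPUT -p tcp --dport 3000 -j ACCEPT
# service iptables save

The second command persists the rule across reboots on CentOS 5 / 6.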

VMware Workstation Tech Preview

I have to say that I think virtualization is awesome. The idea of it is slightly mind-boggling when you try to explain to someone what exactly it is, but from the first few times I heard about it and wrapped my head around it, I thought it was amazing. There are, at times and in my opinion, features that VMware / VirtualBox / Hyper-V lack(ed), though. One of those was obviously 3D acceleration from within a Linux VM.

This bothered me more on desktop VMs than on server VMs. With virtualized servers, you’re obviously looking for something light and non-intrusive. Virtualization provides a way to cut costs on servers because you’re technically getting “more with less”. Most virtualized Linux-based servers run something like Red Hat Enterprise Linux, CentOS, or Debian (none of which are especially resource heavy).

But with a desktop VM, I prefer to have more of the flashy effects that are possible with newer desktop environments (though I’m hardly a fan of those environments). I am a compulsive “distro hopper”, and before I rewrite that second partition on my drive with a new Linux build, I like to test it out and see how I’ll like it before I make the jump.

Enter VMware. But oh hey, VMware Workstation 8 doesn’t provide 3D acceleration for Linux guests. Well, that sucks.

Luckily for us, VMware now has a Tech Preview out (which expires in October – I’m sure that’s when we’ll see VMware Workstation 9 drop) that supports 3D acceleration in some Linux builds (specifically, I’ve seen it work with Ubuntu + GNOME, Linux Mint + Cinnamon, and most other Ubuntu-based distros). It’s easier on the Ubuntu-based distros because the driver support is already in the repos. If you are using Fedora or openSUSE, you’ll have to build the driver from source.

All you have to do is install your distro in VMware and run this:

$ sudo apt-get update && sudo apt-get dist-upgrade

Then install VMware Tools, and there you go. Yay.
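
If the effects still don’t kick in after installing Tools, make sure 3D is actually enabled for the VM – the “Accelerate 3D graphics” checkbox in the VM’s display settings maps to a line in the .vmx file. A quick check (with the VM powered off; “YourVM.vmx” is a stand-in for your VM’s file):

$ grep -i mks.enable3d YourVM.vmx
mks.enable3d = "TRUE"

If the line is missing or FALSE, tick the checkbox in the VM settings and it will be written for you.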

Beginning Learning SQL – Post 2 – User Creation and Sample Database Installation

Well, I finally found a sample script for a database (I started writing my own, but between my job and my life it got a little too cumbersome – I wrote about 200 lines and thought, someone else has to have written this already), so we are ready to begin learning SQL. This is provided that you followed post 1 and have a server (either virtualized or physical) with SQL Server 2008, 2008 R2, or 2012 on it. See my earlier post here if you need help getting set up.

To install this sample database, you’ll need a sysadmin account, as we are creating schemas, databases, tables, constraints, functions, etc. We’ll get into these at a later time, but for now, installing this database will suffice. I am going to assume you are comfortable opening a firewall port and installing SQL Server Management Studio on a second computer (or that you are comfortable remoting into your SQL Server to use SSMS there). You can do all of this through SSMS on the actual SQL Server with the local administrator account. Here’s how:

Log in to your local SQL Server box (VM or physical) as a user with administrative rights. In my case, I just used the default Administrator account in Windows Server 2008 R2. Open SQL Server Management Studio and log in using Windows credentials.

Once you are logged in, open the Security node and then the Logins node on the left. Right-click on Logins and choose the option to create a new login.

Create a new login, save and you’re done.

You can now log in remotely to your box with SQL Server Authentication credentials (provided you’ve opened port 1433 on your server’s firewall). Some people prefer using only Windows Authentication – that’s fine. If you want to do it that way, you can read up on it here.
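
If you’d rather script the login than click through SSMS, sqlcmd on the server can do the same thing. This is just a sketch – the login name and password are placeholders, and the second command grants sysadmin, which is only appropriate for a throwaway lab box:

> sqlcmd -S localhost -E -Q "CREATE LOGIN sqllearner WITH PASSWORD = 'Ch4ngeMe!'"
> sqlcmd -S localhost -E -Q "EXEC sp_addsrvrolemember 'sqllearner', 'sysadmin'"

The -E switch uses your Windows credentials, so run it from the session you’re already logged in to as an administrator.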

Once you log in, you’re going to be looking at a program that (depending on your installation options) looks something like this:

Pretty nifty. To install our sample database, download THIS – it’s a bunch of SQL code stuffed into a zip file. I would list it as code on this site, but alas, it’s almost 7,000 lines long and it’s much easier to zip it.

Unzip it and you’ll have a file called LearningSQL.sql. Double-click on it and it should open in your SSMS Query Editor.

To execute it, you can either press the “Execute” button at the top left of SSMS or hit F5. I would strongly suggest using F5 – you’ll want to get used to it if you are looking at becoming a DBA. The script may take a few moments to run (depending on how much power your server has, network speed, etc.), though it shouldn’t take longer than a minute unless you are using a toaster as your SQL Server.
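
For what it’s worth, the script can also be run without SSMS at all – sqlcmd takes an input file with -i. Another sketch, assuming the file is in your current directory and you’re using the lab login created earlier:

> sqlcmd -S localhost -U sqllearner -P Ch4ngeMe! -i LearningSQL.sql

Handy if you ever need to load a big script on a box where SSMS isn’t installed.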

Within the next few days we will cover some pretty basic queries for pulling data back out of the database you just created – but for now, explore the data we loaded, learn your way around SSMS, and get comfortable with some shortcuts, because they will be extremely handy.