I am going to start writing in this again starting shortly.
I have a lot of stuff to talk about this next year – including a few things that you should be on the lookout for.
1. vSphere 6
2. Windows 10 / Windows Server “Threshold” (2015)
3. 10TB Hard Disks
4. Containers – probably not my favorite thing in the world, but a few companies are making a dent
I’ll try to be a bit more diligent about writing in the coming year.
We’ve recently been hit over the head with news reports of a supposed hack of Apple’s iCloud service (perhaps not the service itself – it could have been brute-force password attacks, the “I forgot my password” questions, or maybe even social engineering, all of which still counts as “hacking”) and the distribution of private photos of celebrities. This post may get slightly political, but I will do my best to keep the political undertones to a minimum and focus solely on something these celebs may be lacking, and that is something called common sense.
We live in a cloud era. I detest that word because it means so much, yet it’s often misidentified, or people just don’t have a clue what it means – there is the private cloud, the public cloud, the hybrid cloud – cloud this, cloud that, iCloud. What exactly is the “Cloud”? In this case, it just means somewhere other than your local device. On my Android phone, I can set my photos to be backed up to Google Drive. That’s part of Google’s cloud. The photos are no longer local to my phone; they’re offsite somewhere.
This has a lot going for it: I could lose my phone, but hey, my photos are still on Google Drive. I could trash my phone and not be able to get anything off of it, but hooray, my contacts are still synced with my Gmail account. It also means that once your data is no longer on your device, you no longer have complete control over it, and it is suddenly much more susceptible to hack attempts. Let’s face it: the only way you can get to something that is ONLY on my local device (provided the internet / data connection is turned off) is to get my phone out of my physical hands and into yours. That’s true right up until the data is no longer only on my physical device – once it’s uploaded somewhere on the internet, a lot of new possibilities open up.
If you have something of value (in the celebs’ case, I would say, against better judgement, that naked pictures qualify), you become a target. That’s right: if you have anything of potential value, you become a target – and to the celebs out there, now that the hackers know you keep naked pictures on your phone, you are about to become even more of one.
There is a LOT of publicly available information about celebs. That means when choosing your “I forgot my password” questions, you need to do better than your birthday (public domain), your dog’s name (public domain), or your first boyfriend’s / girlfriend’s name (public domain).
So here it is – if you don’t EVER want incriminating data or photos to end up in the hands of the masses, don’t keep them anywhere connected. The “cloud” is only as safe as you make it with your passwords and security questions, and it will NEVER be 100% foolproof. Even if you choose a password that is 48 alphanumeric characters long with special characters – if your password reset questions are simple and easy to guess (based on information that can be found publicly), it’s still fairly easy to get into your accounts. And even with insanely hard reset questions and a crazy long alphanumeric password, if the software has a bug in it, your data STILL may not be safe. This is the era we live in.
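To put some rough numbers on that, here’s a quick sketch comparing the brute-force search space of a long password with that of a guessable reset answer. The 80-character alphabet and the 10,000-name guess list are my own illustrative assumptions, not anyone’s real figures:

```python
import math

def guess_space(charset_size: int, length: int) -> int:
    """Number of candidates a brute-force attacker must consider."""
    return charset_size ** length

# Hypothetical: 48 characters drawn from ~80 printable characters.
strong_password = guess_space(80, 48)

# Hypothetical: a reset question answered by a pet's name, which an
# attacker can often cover with a list of ~10,000 common names.
reset_answer = 10_000

print(round(math.log2(strong_password)))  # ~303 bits of entropy
print(round(math.log2(reset_answer)))     # ~13 bits -- the weakest link wins
```

The account is only as strong as the cheaper of the two ways in, which is the whole point about reset questions.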
My advice, as someone who works with cloud-based services, is to never keep anything incriminating anywhere that may be remotely public – and even then, it’s just not a good idea to take naked pictures and keep them on your phone. There is absolutely zero common sense in that, and to some people that data is deemed valuable, which makes you a target.
I don’t think the celebs can blame anyone but themselves for this happening to them, but I do still feel bad for them. I would not wish this on anyone, but I find it pretty crude that the people kids look up to are doing this kind of stuff – and really, it comes down to common sense.
I’m going to start redesigning this site – a pretty big makeover. So please bear with me if there is any weirdness.
I’ve decided to discontinue compiling the Ubuntu kernel that I used to keep moderately up to date. I’m doing this for a few reasons:
- I’m not really an avid Ubuntu / Linux Mint user anymore. I tend to use enterprise releases (CentOS / RHEL / Oracle Linux) and rolling releases (Arch).
- It takes a long time. Compiling takes a toll on your computer and HDD, and I really don’t have the time to do it anymore.
For now, I would suggest checking out this link: http://kernel.ubuntu.com/~kernel-ppa/mainline/
I’ve started helping a BSD distribution with some code and site administration – the distribution is called GhostBSD. I can’t say I have ever offered my help to a distribution and its developers like this before, but quite frankly, I believe in this project, and I believe in it for a few reasons:
1. It’s a really forward-looking, progressive system. Normally when people think about BSD and Unix, they think of a somewhat unstable and unpopular distribution with very little support and no GUI – GhostBSD installs MATE right out of the box and makes it very user friendly. It’s distributed as a “desktop” system, but the resource usage is so low that I really think, even with a GUI, it would make an extremely viable server distribution. I plan on migrating some of our Apache servers over to GhostBSD once 4.0 stable is released.
2. The support is growing, big time.
3. The new package manager (pkg / pkgng) in FreeBSD 10 is awesome. In fact, it’s so good that I think it’s up there with Ubuntu’s apt-get in ease of use.
4. It’s really minimal – it’s not a bloated distro with a ton of stuff bogging down the system. It’s really lightweight and user friendly.
5. It’s extremely easy to use – once again, people tend to think of BSD and Unix as hard to use and unsupported – the installer is a clever GUI called GBI (Graphical BSD Installer) that walks you through partitioning, installation, the works.
I would really encourage people to give it a shot – http://www.ghostbsd.org. I’ve found that VMware Tools are a little finicky on it (in fact, the official VMware Tools are not supported on FreeBSD 10 yet, so you have to use open-vm-tools).
Well, this is a huge ruh roh… for just about anyone on the internet (either server side or client side). There is a huge “implementation bug” in a bunch of currently deployed OpenSSL packages. Numerous operating systems are affected – CentOS, Red Hat, Oracle, Debian, Ubuntu, SuSE / OpenSuSE… and quite honestly, the more I read about this thing, the uglier it gets. They are calling it “Heartbleed” – a play on the TLS Heartbeat extension, which is where the exploit originates.
So, first off, you can’t just update OpenSSL and call it good. That’s not how this thing works. Secondly, just about anyone using a modern operating system as their web server is probably affected.
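As a quick first-pass triage, you can at least check which OpenSSL branch a server reports. This is my own illustrative sketch, not an official detection tool – and note that distros like Red Hat backport fixes without bumping the version string, so a banner check alone is never definitive:

```python
import re

# CVE-2014-0160 affects the OpenSSL 1.0.1 branch, releases 1.0.1 - 1.0.1f.
# 1.0.1g is fixed; the 0.9.8 and 1.0.0 branches never had the Heartbeat bug.
VULNERABLE_BRANCH = re.compile(r"^1\.0\.1([a-f]|$)")

def banner_vulnerable(banner: str) -> bool:
    """Check the output of `openssl version`, e.g. 'OpenSSL 1.0.1e 11 Feb 2013'."""
    match = re.search(r"OpenSSL (\S+)", banner)
    return bool(match and VULNERABLE_BRANCH.match(match.group(1)))

print(banner_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(banner_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False
```

Even if the banner looks clean, check your distro’s security advisories – the backported packages are the normal case on RHEL / CentOS.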
Here is where it gets icky… it’s not just you. In order to completely solve this and make sure people are not eavesdropping on your encrypted sessions, you’re probably going to need to revoke and rekey your SSL certs. As I said, this affects everyone who uses OpenSSL – and that includes the CAs themselves (GoDaddy, Comodo, etc…).
According to the bug report, it’s considered an “implementation bug” – basically, a programming mistake.
Anyway, it’s a huge issue, so you should probably figure out if you’re affected and start working on patching.
More info here: http://heartbleed.com/
Observium is a pretty nifty tool I stumbled upon while we were consolidating some of our monitoring, trying to save on offsite monitoring costs. We wanted to be able to keep past data for reference as well as have current data. Observium sort of fit the bill, so on a whim I just whipped up a server to check it out.
Before we go much further, I’ll just talk about the server requirements – it requires Apache, MySQL, PHP, SNMP (net-snmp), RRDtool, and a few other monitoring plugins. For hardware, you could probably get away with a VM with 2 cores and 2 GB of RAM to start with, though scaling will definitely come into play. MySQL does not have to be on the same server either – you can definitely put it on another box. At first, I tried installing this on CentOS 6.4 (CentOS 6.5 has since come out and I have not tried it), but some of the graphs were not displaying properly. I then tried Debian Wheezy and things worked much better – so I would suggest using Debian even if you’re a RHEL / CentOS shop (like we are).
A few other notes before we go over using it – when I first started using it, it was a 100% free / open source project updated straight from their SVN repository. Since then, it has forked into a “Community Edition” and a “Professional Edition”. The Professional Edition will set you back about $160 (which I don’t have), but I would have to say it’s totally worth it, and sooner or later we will most likely buy it.
The Community Edition is released twice a year and carries all the updates committed to the Professional Edition repository. The only thing it really lacks is an alerting mechanism – it’s really for monitoring only. According to their documentation, the Professional Edition does have alerting built in.
A few things it monitors really well: switches, firewalls, VMware (ESXi hosts), and Linux. Where it lacks is in monitoring specifics – for example, you can’t really monitor MySQL without doing a bunch of work with an agent (which I find to be more trouble than it’s worth, especially since tools like Nagios / NagVis do that pretty well without a bunch of supplemental configuration).
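Since Observium leans entirely on SNMP for the devices it polls, here’s a small sketch – my own helper, not part of Observium – that parses the kind of output net-snmp’s snmpwalk produces, which is handy when debugging why a device’s values look wrong:

```python
def parse_snmpwalk(output: str) -> dict:
    """Turn lines like 'IF-MIB::ifDescr.1 = STRING: eth0' into {oid: value}."""
    table = {}
    for line in output.strip().splitlines():
        oid, sep, rest = line.partition(" = ")
        if not sep:
            continue  # skip malformed or continuation lines
        # Values carry a type prefix such as 'STRING:' or 'Counter32:'
        _, typed, value = rest.partition(": ")
        table[oid.strip()] = (value if typed else rest).strip().strip('"')
    return table

sample = (
    "IF-MIB::ifDescr.1 = STRING: eth0\n"
    "IF-MIB::ifInOctets.1 = Counter32: 1234567"
)
print(parse_snmpwalk(sample))
```

Running snmpwalk by hand against a device and eyeballing the OIDs it returns is usually the fastest way to see whether Observium simply isn’t being given the data it needs.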
Anyway, on to some screenies:
Some switches and other equipment are supported MUCH better than others (for example, this Dell PowerConnect exposes a ton of information, while a Netgear switch that I also monitor exposes very little).
For some devices it will monitor more than just network interfaces – CPU / memory / disk I/O:
Disk structure and CASL: The system itself is set up in a somewhat different way – all spinning disks are in RAID 6 (dual parity) with one spare (meaning you could technically lose up to 2 disks before you really had to start sweating bullets). The SSDs are not in any kind of array, and that bears repeating – they are not in any type of array. Putting anything in a protected RAID array (meaning anything other than RAID 0) means you start to lose capacity. Each SSD is individually used to the fullest of its capacity for nothing but cached reads. So… the question arises: “what happens if an SSD dies?” Well… not much, to be honest – the space that was used for cache is lost, along with the data in that cache, but since it’s a read-only cache, that data is already on the spinning disks, not just in cache. The data deemed worthy of caching will be re-pushed to cache as its worthiness is noted.
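The capacity math works out the way you’d expect – here’s a sketch with hypothetical disk counts (actual shelf configurations vary):

```python
def raid6_usable_tb(total_disks: int, disk_tb: float, spares: int = 1) -> float:
    """RAID 6 loses two disks' worth of capacity to dual parity,
    plus whatever is set aside as hot spares."""
    return (total_disks - spares - 2) * disk_tb

# Hypothetical 12-bay shelf of 1 TB spinners with one spare:
print(raid6_usable_tb(12, 1.0))  # 9.0 TB usable

# The SSDs, by contrast, are not in an array at all, so cache
# capacity is simply the sum of the drives (sizes hypothetical):
print(4 * 300)  # four 300 GB SSDs -> 1200 GB of read cache
```

That’s exactly why leaving the cache SSDs out of any RAID makes sense here: protecting a read-only cache would burn capacity to guard data that already lives on the spinning disks.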
So, on to data write operations. I made a little flow chart that shows how data is written to disk and in what order.
A few other little tidbits on writing data – compression is decent. I personally think LZ4 is a decent compression algorithm – in the past I’ve used several in SAN infrastructure, mainly LZJB (pretty close to LZ4), the standard GZIP levels, and LZMA (which trades much more CPU for a higher compression ratio). LZ4 strikes a pretty good balance between compression and CPU usage – obviously, the higher the compression, the more CPU it takes to compress the data. On things like Nexenta, I prefer the GZIPs over LZJB – mainly because the CPU usage is not out of line (except at GZIP 8+, where it starts getting a little high).
According to some Nimble documentation, compression happens at about 300 MB/s per CPU core. LZ4 (like most Lempel-Ziv based compression) has awesome decompression – it can happen at RAM speed in most cases (1 GB/s+). Compression is optional, though it is enabled by default. We have shut it off on several of our volumes (namely the Veeam volume, since Veeam already compresses so well that we weren’t gaining anything, and our media server, since photos are already compressed and don’t shrink much further). For things like VMs and SQL Server, I think you can expect a 1.5 – 2.5x compression ratio – SQL Server especially compresses well (2.25x+).
You’d think that with compression you might notice a performance hit while compressing and decompressing data like that, but we have not noticed any performance hit – only huge improvements.
Next are the read operations – once again, a little flow chart:
So, let’s talk about cache for a second (accelerated reads). Any “hot” data is written to SSD cache, and the system serves that hot data from cache, so it responds really quickly to changes. Obviously reads are a lot faster coming from SSD than HDD – RAID 6 reads aren’t horrible, especially sequential ones, but SSD is still generally much faster: in the realm of 6 – 10 milliseconds on HDD vs. 200 microseconds (about 0.2 milliseconds) on SSD.
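A back-of-the-envelope model of what that cache does to average read latency (the 90% hit rate and the 8 ms HDD midpoint are my own assumptions for illustration):

```python
def avg_read_latency_ms(hit_rate: float, ssd_ms: float = 0.2,
                        hdd_ms: float = 8.0) -> float:
    """Expected read latency for a given cache hit rate (toy model)."""
    return hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms

print(avg_read_latency_ms(0.0))  # all misses: 8.0 ms
print(avg_read_latency_ms(0.9))  # 90% of reads from SSD: ~0.98 ms
print(round(8.0 / 0.2))          # raw SSD vs. HDD: ~40x faster per read
```

The interesting part is how fast the average falls: even a modest hit rate drags the expected latency most of the way toward the SSD figure, because the slow HDD reads become rare.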
I may have previously mentioned that these use standard MLC SSDs (multi-level cell solid state). Generally speaking, in the enterprise, the “best” SSDs go in this order: SLC > eMLC > MLC > TLC, with TLC being the worst by quite a long shot. SLCs, generally speaking, can endure more writes than their MLC counterparts. If you want more info on why that is, read this. Honestly, when the engineer told us it was just plain MLC, I thought… wow, this product seems so genius, and then they throw MLC in there… what for? I asked that question while he was there, and he gave me some pretty good answers – the main one being that it all comes down to how you use it. SLC SSDs endure random writes very well (by a lot – we are talking 100,000+ writes per cell). Since MLC drives have 2 bits written per cell, the cells can potentially wear out faster, so Nimble converts random writes into sequential writes, which minimizes “write amplification” – basically spammy writes to cells – and thereby prolongs the life of the SSD. Data on the SSDs is compressed and “indexed” with metadata, which also speeds up cached reads.
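A toy model of why sequentializing writes helps. The 4 KB write and 256 KB erase-block sizes are illustrative assumptions, and real flash translation layers are far more sophisticated than this, but it shows the shape of the problem:

```python
def write_amplification(io_kb: int, erase_block_kb: int, coalesced: bool) -> float:
    """Worst case: each small random write forces a full erase-block
    rewrite. Coalescing many small writes into one full sequential
    block brings the amplification back toward 1."""
    return 1.0 if coalesced else erase_block_kb / io_kb

print(write_amplification(4, 256, coalesced=False))  # 64.0: naive random 4 KB writes
print(write_amplification(4, 256, coalesced=True))   # 1.0: writes laid down sequentially
```

If each cell only survives a limited number of program/erase cycles, cutting amplification from 64x to ~1x is the difference between burning through cheap MLC and having it last – which is essentially the answer the engineer gave.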
Next post, I’ll get in to networking and such.
Well, I have to admit that I was pleasantly surprised by Server 2012 and the improvements to Hyper-V. The major improvements I have noticed since I started using it are these:
1. Live migrations between cluster hosts are insanely fast. This is due to compressing the data in flight between the two hosts. You notice a slight bump on the processors (maybe going from 2% utilization to 12%), but it’s nothing to worry about. I’ll be the first to say this feature is amazing – migrations take literally a few seconds, and these are VMs with 6 GB of dynamic memory (using about 2 GB).
2. Virtual Machine Version 2.0. I haven’t used this much yet; some of the features look great and some don’t. The main one is booting from a SCSI disk, along with shared SCSI disks – that means you can create virtual clusters with extreme ease. The problem with Virtual Machine Version 2.0 is that it lacks the RemoteFX stack. That’s not a problem if you’re hosting servers, but if you are doing a VDI deployment (like myself), it’s a bit of a problem.
3. Deduped cluster shared storage. This is something I’ve been waiting for – dedupe in Windows is awesome. In my little VDI environment, I created each individual VM (not using a template), tailored for each user. This is good and bad – it means each user retains their settings, but it also means that when you thin provision and then use 30 GB of space in one VM, that’s 30 GB used on the volume. That’s not a problem for a good SAN, since a good SAN will compress it to about half the size, but when you start having 20 or 30 VMs, it still adds up. Enter deduped cluster shared volumes. Since most of my VMs are Windows 8 or Windows 8.1, they all have about 10 – 15 GB of the exact same data. Instead of storing that data once per VM for all 20 or 30 VMs, dedupe stores it only once, and whenever a read looks for one of those chunks, it reads the deduped chunk.
4. GPT / UEFI boot – this isn’t a huge thing, since most VMs don’t need 2TB+ boot disks, but for those who do, you’re in luck.
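The dedupe savings in point 3 are easy to ballpark. The VM counts and sizes below are hypothetical, and real dedupe works on chunks rather than whole files, but the arithmetic shows why it matters:

```python
def vdi_space_gb(vm_count: int, common_gb: int, unique_gb: int) -> tuple:
    """Space used without and with dedupe when every VM carries the
    same common_gb of identical OS data (toy model)."""
    without = vm_count * (common_gb + unique_gb)
    deduped = common_gb + vm_count * unique_gb  # shared data stored once
    return without, deduped

# e.g. 25 Windows 8.1 VMs, each with ~12 GB of identical OS bits
# and ~5 GB of per-user data:
print(vdi_space_gb(25, 12, 5))  # (425, 137)
```

The shared OS image dominates the footprint in a VDI setup like this, which is why dedupe pays off so much more here than on general-purpose volumes.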
All in all, it looks like a lot of big improvements.