Helpful Tech – Regular IT guy


How to fix Ubuntu network hangs on client Hyper-V

I find I need to run various Linux distros on my work machine for my new job on the Azure Compute team. While I still run Windows 10 as my primary OS (sorry – app compat reasons), I have installed Hyper-V Client so I can at least switch out to whatever distro I need for the task at hand. Things were working great for the most part, until I left the machine running in the background or for extended periods of time.

Then it started to happen. The network stack would say it’s up and happy, but nothing was working. This happened a lot while I was using Ubuntu and I couldn’t figure it out. After some back and forth with some networking guys and virtualization guys, the following was suggested…

  1. Disable NetworkManager: It just doesn’t play well with virtualized NICs and the way most hypervisors work. Sure, it’s supposed to make wireless easier and things like that, but it’s not required in a virtualized environment where your network stack isn’t going to be changing on ya.
    • $ sudo systemctl stop NetworkManager.service
    • $ sudo systemctl disable NetworkManager.service
  2. Disable avahi: It seemed to be the main culprit causing the freezing / wonkiness of the network stack.
    • $ sudo apt-get purge -y avahi-daemon avahi-utils libavahi-core7 libavahi-gobject0
  3. Don’t forget to edit your /etc/network/interfaces file to include the necessary info to enable DHCP on eth0.
    • $ sudo -H gedit /etc/network/interfaces
    • edit or add an eth0 reference. I added in the following and saved the file:
      • auto eth0
      • iface eth0 inet dhcp
    • After saving and exiting the file, restart your networking:
      • $ sudo /etc/init.d/networking restart
  4. Install the updated virtual kernel, Linux tools and cloud tools (if they aren’t already present).
    • $ sudo apt-get install linux-virtual linux-tools-virtual linux-cloud-tools-virtual
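Pulling step 3 together – after the edit, a minimal /etc/network/interfaces ends up looking like this (a sketch that assumes your virtual NIC shows up as eth0; the loopback stanza is the Ubuntu default and should already be present):

```
# /etc/network/interfaces
# Default loopback interface
auto lo
iface lo inet loopback

# Hyper-V synthetic NIC, configured via DHCP
auto eth0
iface eth0 inet dhcp
```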

Lo and behold – things are now rock solid. I can leave my Ubuntu running all day, switch in and out of the VM without any issues and I know my networking stack will be in a consistent state without having to restart the VM.

Am I missing anything? Anyone else working with Ubuntu (client) inside of a Windows client implementation of Hyper-V?

The impossible task of rapid Disaster Recovery – and how Microsoft Azure can help

I remember when I had to set up my first DR plan for a datacenter failover of critical systems from one datacenter to our backup datacenter. It was a nightmare of logistical planning and near-duplicate hardware / infrastructure required to have the bare minimum systems in place for some level of business continuity. It took months of planning and I-Don’t-Even-Want-To-Think-How-Much money in order to pull it off.

Oh – then we had to test it.

That was a fun weekend of 18 hr days I will never get back.

Virtualization has made things significantly easier than running on bare metal. Your VMs are transportable between different systems running the same hypervisor and can be managed by tools like System Center to ensure proper synchronization and coordination when something goes wrong. BUT – you still needed duplicate resources / networking setups and physical datacenters in order for this to work… until Public Cloud datacenters like Microsoft Azure became an option. With hybrid connectivity, virtual networks and subnets, firewall endpoints and a wide range of virtual machine base image sizes, you can dial in your configurations to be damn near identical to the VM spec of your on-premises systems AND only pay for what is up and running.

Our Disaster Recovery offering is called Azure Site Recovery. It recently got a boost in capabilities and now allows you to include VM guests that run on Hyper-V or VMware – so long as they are running a supported version of Windows Server or a Linux distribution that is available in Azure. In the past, Azure Site Recovery required that you had a comprehensive System Center implementation in place in order to protect various workloads. That isn’t the case anymore (although it is still recommended for more complex workloads of multiple machines). In case you missed it – I recently did an interview with Matt McSpirit (a fellow Technical Evangelist on my team who handles the On-Premises space) where he took us through how Azure Site Recovery works.

If DR is on your list of things to explore and possibly implement in the future, OR if you already manage a DR implementation and are interested in simplifying its setup and execution – you need to check out Azure Site Recovery.

You should sign up for the preview so you can try it out yourself (scroll down to the bottom)… Once you’ve been accepted (or just want some more detailed step by step instructions on how it is setup) check out this article on the Azure site.


Succeed in a hybrid world – without losing control of your data

Most of us who work in IT generally feel pretty good about the physical and logical security of our data and systems when they reside on-premises. If you can see and touch the systems, it gives you an added sense of security – especially when you have keycards, biometrics or sometimes just a lock on the door. Augmenting this on-premises security is easy enough, most times with built-in tools like BitLocker or certificate services – a variety of solutions you can apply to data wherever it lies.

But then we add in a Hybrid connection – to someplace else where you don’t have physical access to the host systems, only remote access to your Guest VMs. Public Clouds like Azure need some extra assistance if you want your VHDs encrypted while at rest. Now what do you do in order to get that warm-and-fuzzy feeling of keeping control of your data?

Full Volume Encryption with BitLocker requires a TPM or physical access to the system while booting. At TechEd Europe, my friend Bryon Surace had a session talking about a new partner that was onboarded for Azure called CloudLink. They make a two-part solution that allows you to centrally manage encryption keys used for boot-time decryption on Windows and Linux images as well as data volumes you attach to your machines.

It’s really quite cool – and simple. Once you have established a relationship with CloudLink, you download their “CloudLink Center” virtual appliance (a pre-configured VHD), deploy it to an Azure VM (create new VM from image) and log in to the Management portal. You then install an agent on Windows-based servers that interfaces between BitLocker and their CloudLink server. Once the machine boots, it shows up in the management console and you authorize it for operation. Apparently, this can be integrated into native Linux data encryption mechanisms as well.

Check out their quick demo video on how this logically works – a video is worth a couple thousand words. 😉

Note: it also works with hosted co-location options as well as your on-premises Hyper-V and VMware private clouds.

Very cool solution.  I know a number of customers I’ve spoken to that could use this to bolster their comfort and security levels – potentially unblocking their plans to integrate Azure and Public Cloud into their environments.

High Level Docker Overview – for IT Pros

A while back I shot a quick video with Madhan Arumugam and Ross Gardler talking about what this containerization technology is all about. Looking for information to understand the concepts and options yourself? Have a look at this High Level explanation / whiteboarding of container technologies and their advantages over individual processes and traditional IaaS Virtual Machines.

Madhan Arumugam, Principal Program Manager with the Azure compute team

Ross Gardler, Sr. Technical Evangelist with Microsoft OpenTech

Hybrid Cloud: you know you can set it up, but how much is right for you?

When I talk with customers about Microsoft Azure, I can usually gauge pretty quickly if they are ready to dive in or not quite ready yet. Let’s face it, if you are a die-hard IT Pro who has been working On-Premises for the bulk of your career, starting to use “The Cloud” can be a little unnerving. That’s one of the reasons I always try to get something across from the start: using public cloud resources should be an AND conversation, not a mutually exclusive OR conversation.

No one is trying to get you to drop and migrate all your resources out to “The Cloud”.

I started dabbling in Microsoft Azure a while back, when IaaS first came out.  Things have changed a lot since then, lots of new functionality has been added and it’s getting easier and easier to use. I’ve started to think about it as simply “another” location I could use when I decide to deploy new virtual machines. What are your options for connectivity to these machines? You can abstract it out to 4 levels of connectivity:

  1. Remote Management only: When you spin up new systems in Azure, you control remote connectivity to the machine by modifying things called EndPoints. There are only 2 EndPoints open by default for remote management – an RDP session on a custom port and a remote management port. End result: you can get into your machine, and if there are multiple machines in your setup, they can have connectivity to each other.
  2. Point to Site VPN: I typically see this one as a quick and dirty connection method for a single machine that resides on-premises to have unfettered access to the machines up in Azure. Think of this as either a development box or maybe a database server that you want to keep on-premises for whatever reason, but you want the machines in Azure to have two-way communication back to it. Simple to set up, easy to manage. You configure this from the Azure portal and download the VPN client to run on the box.
  3. Site to Site VPN: Similar to the Point to Site, but it requires some additional setup. You have to define all the subnets you want connectivity to on-premises and in Azure and then download a Gateway configuration script. The gateway can either be a hardware router that you need to set up on-premises or a configuration file that you load into a Windows Server 2012 R2 RRAS server. The nice thing about this option is that connectivity is not limited to only one system. Any system within the network ranges you defined will be able to route its packets out to Azure and back.
  4. ExpressRoute: This is the ultimate connectivity option if you plan on going full-on hybrid after trying out one of the other three options. This is a subscription service which can be enabled on your account that leverages an existing connection you have with one of our partner network providers. Our partner providers have direct connections to various Azure Regions, allowing for a direct connection from your network over their private lines into the Azure datacenter. Your packets are never transmitted over the public internet – it all stays within the network of the provider or Azure datacenter at very high speed with minimal latency. This option comes in very handy when you have a large number of resources on-premises that need low-latency connectivity up to the Azure world.

I have had very good success using both the Point to Site and Site to Site VPNs in smaller production rollouts or pilots / proofs of concept. When it comes to more robust connectivity options, ExpressRoute is definitely the top-tier solution.

Breaking news: We made some announcements at TechEd Europe this week – two additional European partners have been added to the ExpressRoute family (Orange and BT).

Stop the insanity, regain control of user management and security

Sometimes it’s the fundamentals that get missed when you are in FireDrill mode for too long and need to get things done. Or maybe you inherited a fileserver where there are WAY too many admins and you are troubleshooting access issues. Take a moment, step back and revisit the basics of Group strategies and how they should be applied to all sorts of scenarios. You have to understand the history before you can start with the new stuff.

Wait a second. You’re talking about everyday boring groups? Those things you use to group users together so that you can assign access rights to resources? How is this going to help me regain control of users? Let me share a story.

Recently I inherited a Clustered FileServer that had a couple of thousand users who accessed resources from many, many domains across this international Active Directory forest. Upon further examination, the use of groups WAS employed (poorly), but only ONE GROUP was created. This group gave whoever was a member “Full Control” of the file permissions down through the entire folder structure on the server. On top of that, it was used across a dozen different shares, accessed by different groups of users across the entire organization. This fileserver was running on aging hardware, constantly getting “full” and was due for a swap to a new solution. How do I handle this while continuing to work on my regular day job?

Procuring the new hardware was easy.


I ordered up a nice 70 terabyte Cluster-In-A-Box from DataOn Storage and got it set up as a Clustered Fileserver. After establishing a large DrivePool and carving out a new Dual Parity StorageSpace, I set about doing some basic Group planning for future access.

Every SysAdmin has their own philosophy on how to assign access rights to shares and folder permissions. There have been some enhancements with Windows Server 2012 R2, but fundamentally things have not changed all that much (A,G,DL,P):

Assign users into
Global Groups. Nest them inside
Domain Local groups and Assign
Permissions to the share / folder structure.
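As a sketch of that A, G, DL, P flow using the ActiveDirectory PowerShell module and icacls (the group names, users, domain and path here are all hypothetical – substitute your own naming convention):

```powershell
# A -> G: assign users into a Global group
New-ADGroup -Name "G-Finance-Users" -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity "G-Finance-Users" -Members jsmith, mdoe

# G -> DL: nest the Global group inside a Domain Local group
New-ADGroup -Name "DL-Finance-Modify" -GroupScope DomainLocal -GroupCategory Security
Add-ADGroupMember -Identity "DL-Finance-Modify" -Members "G-Finance-Users"

# DL -> P: grant the Domain Local group Modify on the folder,
# inherited by subfolders (CI) and files (OI)
icacls "D:\Shares\Finance" /grant "CONTOSO\DL-Finance-Modify:(OI)(CI)M"
```

Because permissions only ever land on the Domain Local group, day-to-day access changes become simple group membership changes instead of ACL surgery.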

Why do I bring this up? You would be surprised at the number of times I’ve seen ACLs (Access Control Lists) for folders / shares that have individual users directly added to them. Usually it’s the result of someone granting Full Control to a non-technical person (who has no background in managing servers) who then gets a little too advanced for themselves by changing file permissions, only to “Apply this to all files and sub folders”.

Do yourself a favor. Please explain this concept to anyone who will be managing a folder structure or share on a server. DON’T MAKE THE ASSUMPTION that they know what you are talking about. But also explain to them about reusing groups where it makes sense and possibly “mail enabling” groups in order to make them multi-purpose. A well-managed AD with an understood and communicated Group Strategy will go a LONG way to keep your sanity, keep the users in line and rein in wayward file servers.

That migration project for the file server? It’s almost done. I’ve practiced what I’ve preached here and contacted the respective owners of the various shares to re-confirm their requested level of security. I’ve created groups and nested them inside local groups on the new server. I’ve also “trained” the owners of the shares on what groups are being used, and I’ve delegated them the rights to go and manage the group memberships to ultimately control who has access to the resources. I’ve set up some RoboCopy command scripts to copy over and synchronize data. I’m almost ready to flip the switch – just got to get back from my travels on the road and send out the notification emails.
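For the copy/sync portion, the RoboCopy commands I run are along these lines (server, share and log path are hypothetical; note that /MIR mirrors the source, deleting destination files that no longer exist at the source, so double-check the target path before running):

```
robocopy \\oldserver\finance \\newfs\finance /MIR /COPYALL /R:1 /W:1 /MT:16 /LOG:c:\logs\finance-sync.log
```

Run it once for the bulk copy, then re-run it near cutover so the final sync only has to move the deltas.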

I think some of the follow up from the final process would make a good couple of posts. Stay tuned for more.

If you can’t wait and need to make some time to figure out what’s coming around the bend, check out the new EvalCenter with its concept of “Tech Journeys” and explore some Hybrid datacenter concepts or Mobile Device Management.

Installing the Windows Azure PowerShell Cmdlets.

I am assuming you have used the online graphical portal a bit and now you want to be more productive and start some rudimentary automation. We don’t expect you to use the portal for everything. For an IT Pro, the logical choice is to use PowerShell and work like an admin from your workstation. Before I go into more depth on all sorts of components and features/capabilities of Windows Azure, let’s prep your workstation for some automation.

Step 1: Download the files.

Head on over to the download page from the Windows Azure site. http://www.windowsazure.com/en-us/downloads/


This will kick off the download of the Web Platform Installer. This tool will be available on your system to download the current version as well as all the updates we periodically make to the cmdlets.

Step 2: Use the Web Platform Installer to install cmdlets and dependencies.

It’s not just the cmdlets that will download – all the dependencies come down and get updated as well. Don’t worry – the Web Platform Installer (WebPI) has you covered for ensuring everything is up to date.


Step 3: Put the install location into your path

This is optional, but helpful if you will be using the cmdlets a lot. There are a number of ways to do this, but in my opinion the least invasive way is to update your PATH environment variable with the Azure cmdlets install path.

The cmdlets are installed (by default) in C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure

Pull up your system properties. (I right click on “This PC” or My Computer and choose properties). Click on Advanced System Settings.


Click on Environment Variables


Update the path statement to include C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure. Don’t forget to go to the end of the line and add a ; separator first – don’t overwrite your existing path!
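If you would rather script this than click through the dialogs, something along these lines from a PowerShell prompt should do the same job (a sketch – it appends to the user-level PATH, which only affects newly opened windows):

```powershell
# Append the Azure cmdlets folder to the persisted user PATH
$azurePath = "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure"
$current   = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$current;$azurePath", "User")
```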


Close off all your windows with the OK buttons and you are good to go.

Step 4: Try it out in PowerShell and PowerShell ISE

Just to make sure – check it out in your PowerShell window and tool of your choice.



Fairly simple and straightforward – but surprisingly hard to find out how to set up in the easiest way possible. From now on, this system is ready to go with the Azure PowerShell cmdlets.

Step 5: Simplify Your Settings

When you need to run commands against your Windows Azure subscription, the session needs some settings to be referenced a lot. Without them, you will get a pop-up window to log in with the Microsoft Account (or other account) you are using to manage and interact with your subscription via the portal. To make your life WAY easier – if this is your “management workstation” that you maintain and secure – you can download your Azure Publish Settings file, including your management certificate. Trust me, it’s worth doing. It’s so simple.

From a PowerShell prompt, type in:

PS C:\> Get-AzurePublishSettingsFile

That will require authentication to the Azure Portal in order to create your Settings File.


It will prompt you to download and save it to a secure location. Change to that location in your PowerShell window and then type in:

PS C:\> Import-AzurePublishSettingsFile

If you were not in the proper directory where the file resides, you will need to include the full path and name of the file.

To check if the settings file worked correctly – check what subscription is active in the PowerShell console session by typing in:

PS C:\> Get-AzureSubscription

This should respond with details of your subscription, including details on the management certificate which will be valid for one year.

That’s It – You Are DONE!

Step 6: What about Updates?

That’s simple! Periodically run the WebPI utility to check whether any updates are available.


Notice the date for Windows Azure PowerShell AND that there is no longer an option to “add” it, as it has already been installed. If updates are available, that button will become active once again.

That’s about it – if there was an update, it would come down to the appropriate path that has already been added to the system path and therefore all new PowerShell windows and ISE sessions would automatically be updated with new functionality.

Windows Azure–where do I sign up?

(NOTE 3/11/14: clarification on billing with “spending limits feature” has been added. See italicized text in bullet points for additions)

No seriously – where do I get it?

Do I have to get some sales guy to come and sell it to me? Do I contact a software reseller to sell me a copy? Do I need gobs of cash to be able to try this out?

In short – the answer is NO.

The fastest and simplest way to get started is to get your own free trial.

Go to www.windowsazure.com


I think it is relatively apparent where you go for the Free Trial – but I thought I’d highlight the arrows with more arrows in red.

You’ve got links to a FAQ, a phone number you can call to answer questions and $200 in credit to spend on your trial. I suggest you take a moment to read the FAQ. There are a lot of preconceived notions that are either false or greatly out of date with regards to signing up for a free trial. I’ll highlight a couple below:

  • You can use the $200 to try out any number of services without restriction (except the $200 credit limit or 30 days – whichever comes first).
  • The trial is absolutely FREE – you will not be charged for anything above and beyond the $200 credit.
    • MYTHBUSTER: we do not charge you for overages or “mistakes” you make during this trial because you are unfamiliar with how billing works and you are in  a “learning phase”.  In the past we did not have a “cap” that could be added to protect early adopters from getting bills they didn’t expect.
  • A credit card and Microsoft Account are required.
    • MYTHBUSTER: as mentioned above, we do not charge your card for this free trial. You are welcome to use your business or personal card – it is used for identification purposes only. I mean – come on – we don’t want people spinning up services and VMs to do Bitcoin mining things without knowing who they are.
  • If you exceed the $200 credit limit on this trial or hit 30 days, the services and account will be automatically suspended.  You are welcome to convert the trial into a simple “Pay-As-You-go” option to maintain your services and will be billed accordingly for services use.
    • The Spending Limit feature is targeted to the MSDN and Partner Cloud network members. It is not available on the Pay-As-You-Go or consumption plans. It was designed to ensure these members won’t get billed while they are developing solutions on the Azure Platform.
    • You are able to sign up for Billing Alerts to warn you when you are approaching thresholds and want to proactively scale back before incurring charges. See this article for more details.
  • Azure Free Trials are available in all countries/regions where Azure is commercially available. Windows Azure is currently (as of March 1st, 2014) available in the following 84 countries/regions: Algeria, Argentina, Australia, Austria, Azerbaijan, Bahrain, Belgium, Brazil, Bulgaria, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Denmark, Dominican Republic, Ecuador, Egypt, El Salvador, Estonia, Finland, France, Germany, Greece, Guatemala, Hong Kong, Hungary, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Jordan, Kenya, Korea, Kuwait, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia (FYRO), Malaysia, Malta, Mexico, Montenegro, Morocco, Netherlands, New Zealand, Nigeria, Norway, Oman, Pakistan, Panama, Paraguay, Peru, Philippines, Poland, Portugal, Puerto Rico, Qatar, Romania, Saudi Arabia, Serbia, Singapore, Slovakia, Slovenia, South Africa, Spain, Sri Lanka, Sweden, Switzerland, Taiwan, Thailand, Trinidad and Tobago, Tunisia, UAE, UK, United States, Uruguay, Venezuela, Belarus, Kazakhstan, Turkey, Ukraine, and Russia.

Note regarding the credit card requirement: all online “cloud” services require a credit card for identity verification and trials these days. It’s the minimum bar for entry to ensure some level of validation / accountability. If you don’t have one, you might be able to register one of those “pre-paid charge cards” from a store, provided you registered your information for online purchases – but I’ve never tried it myself.

If you want more details on the plans and how the Spending Limit works – check out this article. If you want to know how to setup Billing Alerts, check out this article.

Fill out the registration details and validate via text message or automated voice call.


Once the code gets validated, the payment information becomes available. Once confirmed, you should end up at the Subscriptions page with a “pending status” as we get things setup for you.


This can take some time – click on the (Click here to refresh) option to check on its status. When I wrote this blog post it took all of a minute to be ready. Once you are listed as “Active” (my screenshot shows “Pending”) you can click on the blue Portal area up in the right corner.

Once you progress to the portal, a quick tour option is available to walk you through the very basic functionality of what the Management portal can do and its various context-sensitive notification areas.


Once you have gone through the 5 quick slides of info, you are dropped into the Management Portal for your Windows Azure account. You’ll be spending some time in here working with the services and setting things up. I’ll be going over a bunch of things I’ve done in here as part of this ongoing series. Take some time, explore a bit and check out the Help in the bottom right corner of the management portal.


Let’s have some quick fun – something all of us ServerHuggers can embrace and understand – let’s make a Windows Server 2012 R2 Virtual Machine and RDP into it! To keep things REAL simple, I suggest you try out the Quick Create of a Virtual Machine from the big NEW button at the bottom left of your portal.


Fill in a unique DNS name (I use my initials RJC with demoVM to make RJCDemovm), create a local admin user name and confirm an admin password. Finally, choose a region/affinity group (where it will be hosted) and click on “Create Virtual Machine”. Once the info has been submitted, Azure will start the provisioning process and give you a status update in the portal. You can see from the shot below – mine is provisioning, it has a name of rjcdemo.cloudapp.net and you can see a job to finish its provisioning is running by the animated green bars in the bottom right corner of the portal.


Notice it takes some time to spin up – think of a VHD being copied out of a VM Library, then being assigned into your storage and finally being started for the first time. It has to go through the initial Sysprep-like first-boot activities and have configuration settings passed through to it via a custom-made unattend.xml file (where do you think it got the username and password to create from?). Eventually it will come up to a Running state.

Once it hits that Running State – you have the billing meter running (against your $200 free credit) to the tune of about $0.10 / hr for a small instance. It’s billed by the minute and you are NOT charged when it is Shut Down – so don’t forget to shut it down when you are done playing with it.

You’ll notice at the bottom, when the machine is selected you can Connect, Restart, Shut Down, Attach / detach disk, capture and Delete. Click on the CONNECT button.


A familiar open/save dialogue opens up – save the file someplace. It’s just an RDP file that has the Fully Qualified Domain Name of your VM and the special non-standard listening port for the RDP connection (in my case it’s rjcdemovm.cloudapp.net:52778). This gets re-mapped to the proper 3389 port by Azure (more on this later). Launch this connection and sign in with the Admin ID and password you filled out in the Quick Create form and voila!


NOTE: In case you didn’t know, signing in with .\{username} signifies that you are logging in to the LOCAL account database of the system (since it’s not domain joined AND since I am running this demo from my corporate machine, you can see me authenticate correctly in the middle with local creds).

Accept the certificate warning and the RDP session opens to your new desktop of a server running in the cloud on an ISOLATED network that has been NAT’ed behind the Azure firewall.  Feels like home, eh?  Go ahead – poke around, check out and explore all the sort of stuff you would do when you rack a server or spin up a VM for the first time. Kick the tires and play around – all seems very familiar, eh?

OK – that’s enough for this post. Once you are done playing around, log off the Virtual Machine and return to the Azure Management Portal. From there, select the machine and choose SHUTDOWN from the bottom bar. This will gracefully shut down the VM and stop the charges for the machine in order to preserve your credit. If you forget, it’s going to cost you about $1.20 to run this overnight for 12 hrs or so – not exactly going to break the bank.

Congrats on taking the first step towards this Cloud thing as a ServerHugger.

It wasn’t so bad now, was it?

P.S. One last thing:

If you are from the developer side of the house in IT, you might already have an MSDN subscription that includes recurring monthly credits and benefits that can be activated. If you’re an IT guy who sits on the Infrastructure side of the house, you might want to check to see if your developer brethren have already started using this benefit and see if you can get in on the action. You see – you can have multiple admins and access to subscriptions for access to these benefits. But really – you probably want your own space to play in and learn.

Need a quick lab/sandbox to try out MSFT technologies?

One of the best parts of my job is talking with IT Professionals / SysAdmins / Students from all over. It doesn’t matter if they are independent IT consultants, staffers / lifers at “company x” or someone just getting started in the IT field – they all at some point ask me about “spinning up a lab” for one thing or another.

That’s when I let them in on a little secret: The Microsoft Virtual Labs Experience.


I use our Virtual Lab Experience when I need to quickly get something up and running. Why?

  • It’s a virtualized Sandbox that can be yours to play in for the duration of the lab time – FOR FREE.
  • It comes with a pre-defined lab including step by step instructions that allows you to test drive all sorts of Microsoft Technologies and solutions
  • Better yet, even though it comes pre-configured – you are NOT RESTRICTED on what you can do within it!

(Yes – I am encouraging you to colour outside the lines)

What can you find in this plethora of sandboxes (as one size does not fit all)? Just over 500 Hands-On Labs, most providing 2 hours of lab access! 60+ labs from TechEd North America 2013 were just posted earlier this week. More RTM-updated labs with Windows Server 2012 R2 and System Center 2012 R2 were recently added as well.

You can mix and match which ones to try, select different technologies or scenarios for whatever tickles your fancy. You can even rate the labs and sort them by most popular or most relevant to your searches. Heck – you can get social and share out what environments you are trying out with your various social networks.

Being that I am very passionate about the Windows Server 2012 R2 stack, here is one more tip. You can get a good “trial” by registering to kick the tires with a simple 4-part series:

  1. Windows Server 2012 R2 – Configuring and Managing Servers
  2. Windows Server 2012 R2 – Storage Infrastructure
  3. Windows Server 2012 R2 – Network Automation using IPAM
  4. Windows Server 2012 R2 – Exploring Hyper-V Server

Check out the labs and start giving the team your feedback and opinions using the post-lab survey. They are just down the hall, tell them I sent ya.

If you are like me and are curious by nature about how all this back end virtualization works… check out this interview I snagged with Corey Hynes, architect from HOL systems. They are the guys and gals who host this virtual lab experience for us.


They have a very cool Windows Server powered solution that handles all the heavy lifting to get these spun up on demand when you click on “Launch Lab”.

How To: Delete windows.old from Windows Server 2012 R2

I’ve been updating my various environments from Windows Server 2012 RTM or the Windows Server 2012 R2 Preview (build 9431) to the final Windows Server 2012 R2 bits. On some boxes I just use my scorched-earth policy of levelling the partitions and starting from scratch – on others I do an in-place install on the same partition. You get the following dreaded message – which you dismiss and move on.


Sure – I’ll just go and delete that directory after a while and go about my merry way.

Unfortunately it is not that easy.

In Windows client environments, you can just kick off a “Disk Cleanup” routine and have it removed – saving you a dozen or more GB of space. Unfortunately, Disk Cleanup does not exist in a Windows Server 2012 / 2012 R2 Full GUI install unless you add the Desktop Experience feature.
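If you would rather have the Disk Cleanup tool itself than delete things by hand, the Desktop Experience feature can be added with a single cmdlet. A minimal sketch (note this pulls in extra client components you may not want on a server, and it needs a reboot):

```powershell
# Installs the Desktop Experience feature (which includes Disk Cleanup)
# and restarts the server automatically when done.
Install-WindowsFeature Desktop-Experience -Restart
```

The rest of this article assumes you’d rather not install anything extra and will delete the directory manually instead.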


Fear not. Once you have confirmed you need nothing from that old c:\windows.old directory structure, you can manually delete it, with a little bit of extra effort.

Here’s how you do it.

1) Download Junction.exe from Sysinternals. I extracted and saved it to c:\source. You will use this tool to generate a list of all the junctions that have to be removed.
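If you prefer to script the download, something like the following works on a box with PowerShell 5 or later (Expand-Archive is not present on older versions; the download URL is the standard Sysinternals one):

```powershell
# Create c:\source, then download and extract the Junction tool into it.
New-Item -ItemType Directory -Path C:\source -Force | Out-Null
Invoke-WebRequest -Uri https://download.sysinternals.com/files/Junction.zip -OutFile C:\source\Junction.zip
Expand-Archive -Path C:\source\Junction.zip -DestinationPath C:\source
```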

2) Create a reference file that lists all the junction points and symbolic links in use by opening a command prompt, changing into c:\source and running:

junction.exe -s -q c:\windows.old > junctions.txt

3) Open PowerShell ISE with administrator rights and run the following script to remove all symbolic links and junction points in c:\windows.old.

foreach ($line in [System.IO.File]::ReadLines("c:\source\junctions.txt"))
{
    if ($line -match "^\\")
    {
        # Strip the ": JUNCTION" / ": SYMBOLIC LINK" suffix to get the bare path
        $file = $line -replace "(: JUNCTION)|(: SYMBOLIC LINK)",""
        & c:\source\junction.exe -d "$file"
    }
}

You should get the following scrolling by…


Now it’s a simple matter of taking ownership, granting rights and deleting c:\windows.old to get your space back.

4) To take ownership, use:

takeown /F c:\windows.old /R /D Y

5) Delete c:\windows.old – you now have the permissions and ownership to do so.
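Putting steps 4 and 5 together, the whole cleanup from an elevated command prompt looks something like this (the icacls line is my addition, for the case where takeown alone doesn’t leave you with delete rights):

```cmd
takeown /F c:\windows.old /R /D Y
icacls c:\windows.old /grant Administrators:F /T
rd /s /q c:\windows.old
```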

How much space you get back depends on your particular situation. My last run at this saved me 15.5 GB of space on my OS drive.

Note: Kudos to Peter Hahndorf’s answer on ServerFault.com, on which this article was based.
