Azure – Page 13 – Regular IT guy


Now what do I do? Next steps after signing up for an Azure Trial.

Given the breadth of what Microsoft Azure can do, it can be daunting to roll up your sleeves and get started without further guidance. Let me get you on the fast track for learning what is possible in a way that is both simple to understand and relevant to what you are doing now. Makes sense, doesn’t it?

A while back I started a Category called “Azure4ServerHuggers” – specifically designed to get you up and working with Azure if you are already experienced in the On-Premises world. I haven’t made the time to continue down that path of posts in a while – something I am going to change, starting now.

Let’s set some ground rules about what you need to know in order to follow these ongoing posts. Check out these two posts to get started:

  • Windows Azure – where do I sign up? – info on how to sign up for the Trial and all the background info you will need to know. Plus I do a “Quick Create” Windows Server virtual machine and connect up to it via RDP.
  • Installing the Windows Azure PowerShell Cmdlets – Let’s face it: you can start in the GUI, but ultimately you need to start working with cloud services in an efficient manner with the proper tools. Get used to working with it via PowerShell.

Now let’s work on a baseline sample test environment. My friend Joe Davies and the Azure documentation team have created a set of environments I will be using as my lab setup for all of these “Azure4ServerHuggers” posts going forward. These documents are called “Test Lab Guides” and are designed to provide a consistent sample environment from which to start tests. We will ultimately be using the “Setup A Hybrid Cloud Environment for Testing” guide, which looks like the graphic below.

[Diagram: hybrid cloud configuration]

This setup can be done on a single Hyper-V (or other virtualization) box that has a public IP address associated with one network card. Your CLIENT1, APP1 and DC1 will be running in an isolated network, with the dual-NIC RRAS1 box connected to the public internet. This setup assumes you configure your “corpnet” as per the “Base Configuration for Windows Server 2012 R2 – Test Lab Guide” and then further configure the RRAS1 box to act as your gateway.

But we aren’t going to jump there just yet… This is our starting point.

[Diagram: CorpNet base configuration]

I wanted to show you where I was heading – a Hybrid Cloud environment that allows seamless access of cloud resources through a gateway, probably the most tangible method of useful connectivity a ServerHugger like me can grasp.

Ultimately, in order to use Azure you just need an internet-connected machine, a trial account, and either the GUI portal or PowerShell (you really should learn to use PowerShell with Azure – have I mentioned that yet?). Go ahead! Get started with creating a few machines via the portal. Don’t worry – any resources you make in Azure are “confined” inside the datacenter and your subscription until you OPEN endpoints to the internet. Until you create a gateway to your private network, all connectivity and traffic has to go across the public internet.

While I am encouraging you to “play” with this before the series gets started – DON’T FORGET: all public cloud services are consumption-based, and you will be charged for running systems in Azure and other cloud providers. Azure bills VMs by the minute, based on the size of the machine and the time it is running. An A1 machine (1 core, 1.75 GB RAM) will cost you $0.002 US a minute (approx. $0.12 an hour) to run, but our largest G5 series machine (32 cores, 448 GB RAM) costs $0.13 a minute (approx. $7.80 an hour).
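To put those per-minute rates in perspective, here is a quick back-of-the-envelope sketch in Python. It uses the 2015 prices quoted above purely for illustration – always check the current Azure pricing page for real numbers – and shows why leaving lab VMs running 24/7 adds up fast:

```python
# Rough Azure IaaS cost estimator using the per-minute prices quoted in this post.
# These rates are illustrative only -- prices change; check the Azure pricing page.

PRICES_PER_MINUTE = {
    "A1": 0.002,  # 1 core, 1.75 GB RAM
    "G5": 0.13,   # 32 cores, 448 GB RAM
}

def monthly_cost(size: str, hours_per_day: float, days: int = 30) -> float:
    """Estimated USD cost of running one VM for `hours_per_day` each day."""
    minutes = hours_per_day * 60 * days
    return PRICES_PER_MINUTE[size] * minutes

# An A1 left running 24/7 vs. shut down outside an 8-hour lab window:
always_on = monthly_cost("A1", 24)  # ~$86.40/month
lab_hours = monthly_cost("A1", 8)   # ~$28.80/month
print(f"A1 24/7:   ${always_on:.2f}/month")
print(f"A1 8h/day: ${lab_hours:.2f}/month")
print(f"G5 24/7:   ${monthly_cost('G5', 24):.2f}/month")
```

Shutting the lab box down outside working hours cuts the bill to a third – and that is for the smallest size in the lineup.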

Make sure you shut down any machines you create that are not required after working with them in the lab – I’ll show you how to automate this in a future post.

If you are looking for some consumable video content before we get started, I’d suggest checking out this list of courses on Microsoft Virtual Academy.

A couple of topics I will be covering in this series (list will grow over time):

  • Quick Create of a VM via new portal.
  • Creating a VM via PowerShell
  • Creating Multiple VMs on the same vNet
  • Scale OUT (multiple VMs) and Scale UP (VM size and performance optimization)
  • Designing IaaS for higher availability to minimize service interruptions
  • Creating a Hybrid Environment
  • Having a PaaS Azure Website talk to a VM on a vNet
  • Migrating on-premises / other Cloud Provider VMs to Azure

Do you have suggestions for additional topics? Leave them in the comment area!

Trust. You have to build it one step at a time

I think the title says it all. Working in cloud environments with various cloud providers, you start to realize your comfort level with their services comes down to how much you trust your provider and the services they provide. When issues come up, how they are handled – and what measures are put in place to prevent them from happening again – are small steps toward continuing to build trust. I get asked the trust question a lot by customers who are considering using cloud services.

Today MSFT takes one more step forward by announcing we’re the first provider to adopt and adhere to ISO/IEC 27018 – an international standard for cloud privacy. It is one of many ISO certifications and attestations that our cloud services achieve. This one is very cool as it relates specifically to privacy and “Your Data”. I suggest you have a read of Brad Smith’s blog post for more specific info and links. Some quick points from his post I found interesting: by adhering to ISO 27018, we’re committed to protecting your privacy and data in a number of ways:

  • You are in control of your data. Adherence to the standard ensures that we only process personally identifiable information according to the instructions that you provide to us as our customer.
  • You know what’s happening with your data. Adherence to the standard ensures transparency about our policies regarding the return, transfer, and deletion of personal information you store in our data centers. We’ll not only let you know where your data is, but if we work with other companies who need to access your data, we’ll let you know who we’re working with. In addition, if there is unauthorized access to personally identifiable information or processing equipment or facilities resulting in the loss, disclosure or alteration of this information, we’ll let you know about this.
  • We provide strong security protection for your data. Adherence to ISO 27018 provides a number of important security safeguards. It ensures that there are defined restrictions on how we handle personally identifiable information, including restrictions on its transmission over public networks, storage on transportable media, and proper processes for data recovery and restoration efforts. In addition, the standard ensures that all of the people, including our own employees, who process personally identifiable information must be subject to a confidentiality obligation.
  • Your data won’t be used for advertising. Enterprise customers are increasingly expressing concerns about cloud service providers using their data for advertising purposes without consent. The adoption of this standard reaffirms our longstanding commitment not to use enterprise customer data for advertising purposes.
  • We inform you about government access to data. The standard requires that law enforcement requests for disclosure of personally identifiable data must be disclosed to you as an enterprise customer, unless this disclosure is prohibited by law. We’ve already adhered to this approach (and more), and adoption of the standard reinforces this commitment.

Go read Brad’s article and check out the additional links – it makes for a good read.

Understanding when a hybrid cloud really makes sense: 3 real world examples

“How about a couple of real-world examples of situations where a hybrid cloud makes sense?” Seems like a simple enough question from a non-technical colleague. I have worked with a lot of different groups and customers who have put together a number of such scenarios, giving me enough background to come back with a couple of possible options.

Before we go there – some quick review: If you are looking for a good range of background info on Azure and Hybrid Cloud – I strongly suggest you check out some of the MicrosoftVirtualAcademy Hybrid Cloud Training resources.

How about we keep this simple and focused on a couple of quick-hit scenarios where this played out. Remember, as you have read over the last couple of months, you have multiple methods of connectivity back into your Azure vNets to get a hybrid solution – here is a quick recap:

  • Point to Site: quick and dirty, easy to set up. Downside – only one box “on premises” will have access to the Azure environment.
  • Site to Site: software or hardware gateways supported. Much better and “production ready”, depending on your comfort level. A software-based gateway running Windows Server 2012 / 2012 R2 can meet your needs, OR a supported hardware device like a PIX firewall if you prefer a hardware solution. Advantage over Point to Site – you define the subnets that have connectivity to the Azure vNet, and as many machines as you like can now securely access the Azure environment.
  • ExpressRoute: a service provided by one of our partners, who probably already provide your WAN connectivity to your branch offices. Very fast, reliable and low latency. You are literally patching in an Azure vNet like it’s a branch office.

I went from cheapest to most expensive, as well as least scalable to most scalable. With that in mind, let’s get back to those three quick-hit scenarios:

1) Keep my data on-site, in my datacenter.

I get this one all the time. People are ready to work with a public cloud offering, but want to ease their way in with regards to maintaining control of their data. If your applications can handle being separated from the data back ends and you have available bandwidth to support whatever latency they can work with – this is a simple case for a hybrid solution.

2) Fire up a quick pilot without capital costs.

Say your virtualization hypervisors are all in production and you want to spin up a couple of app servers or set up a trial workload from some vendor. Once you have spent the time to understand how easy it is to establish a hybrid connection to Azure, you have “subnets on demand” with whatever compute power you need – all billed only for the time they are ON, broken down into by-the-minute charges. No licensing to worry about, no delay to get hardware spec’ed, ordered and installed, no additional physical demands on your infrastructure. You also get to determine which portions of your network have access to this secure subnet (no public access to it from the internet unless you open it up). When the pilot isn’t running (off work hours) you shut down machines to suspend billing and start them up when you need them again. Once the pilot has been completed, you move machines over to on-premises hardware (they are VHD files) or establish a “production” subnet on demand.
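That by-the-minute granularity matters for stop/start pilot workloads. Here is a small illustrative sketch (using the per-minute rate quoted earlier in this post, not current pricing) comparing per-minute billing with the whole-hour rounding some providers use:

```python
import math

RATE_PER_MINUTE = 0.002  # illustrative A1-class rate quoted in the post (USD)

def per_minute_bill(minutes_run: int) -> float:
    """Per-minute billing: pay only for the minutes the VM was running."""
    return minutes_run * RATE_PER_MINUTE

def per_hour_bill(minutes_run: int) -> float:
    """Hourly rounding: every started hour is billed in full."""
    return math.ceil(minutes_run / 60) * 60 * RATE_PER_MINUTE

# A pilot VM started and stopped ten times, 95 minutes per run:
runs, minutes_each = 10, 95
print(f"per-minute billing: ${runs * per_minute_bill(minutes_each):.2f}")
print(f"hourly rounding:    ${runs * per_hour_bill(minutes_each):.2f}")
```

Each 95-minute run rounds up to two full hours under hourly billing, so the frequently stopped pilot costs noticeably more there than under per-minute charges.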

3) Disaster Recovery for certain workloads.

I say certain workloads because some workloads are supported in Azure, while others aren’t supported at this time. But for those that are, you can establish a Site to Site VPN connection and use a service like Azure Site Recovery to replicate select on-premises servers up to Azure and keep them synchronized. If you need to do a DR test (or the real thing), you initiate a site recovery and Azure ensures the last replication finishes (where possible) and brings the systems up in the properly defined order. If you have hybrid connectivity established to your other sites, they will have connectivity to the DR site and the services that ASR temporarily relocated there. This is by far the most complex of the three scenarios – but it’s just damn awesome in its powerful capabilities.

Those are my quick three scenarios where Hybrid cloud solutions just make sense.  What do you think – have you come across any that didn’t make my top 3 list?

Azure IT Pro News Roundup: January 15th, 2015

I was asked by some folks to aggregate news on the Azure front for IT Professionals, so I thought I’d try out a regular roundup post pointing to the sources. Hey, you never know – this might catch on…

That’s about it for this round up. Do you have any cool ones to share that I missed? Include them in the comments section!

Succeed in a hybrid world – without losing control of your data

Most of us who work in IT generally feel pretty good about the physical and logical security of our data and systems when they reside on-premises. If you can see and touch the systems, it gives you an added sense of security – especially when you have keycards, biometrics or sometimes just a lock on the door. Augmenting this on-premises is easy enough, most times with built-in tools like BitLocker or certificate services, which you can apply to your data wherever it lives.

But then we add in a hybrid connection – to someplace else where you don’t have physical access to the host systems, only remote access to your guest VMs. Public clouds like Azure need some extra assistance if you want the warm-and-fuzzy feeling of your VHDs being encrypted at rest. So what do you do to keep control of your data?

Full volume encryption with BitLocker requires a TPM or physical access to the system while booting. At TechEd Europe, my friend Bryon Surace had a session talking about a new partner that was onboarded for Azure, called CloudLink. They make a two-part solution that allows you to centrally manage the encryption keys used for boot-time decryption on Windows and Linux images, as well as for data volumes you attach to your machines.

It’s really quite cool – and simple. Once you have established a relationship with CloudLink, you download their “CloudLink Center” virtual appliance (a pre-configured VHD), deploy it to an Azure VM (create a new VM from the image) and log in to the management portal. You then install an agent on Windows-based servers that interfaces between BitLocker and the CloudLink server. Once a machine boots, it shows up in the management console and you authorize it for operation. Apparently, this can also be integrated with native Linux data encryption mechanisms as well.

Check out their quick demo video on how this logically works – a video is worth a couple thousand words. 😉

Note: it also works with your hoster’s co-location options, as well as in your on-premises Hyper-V and VMware private clouds.

Very cool solution. I know a number of customers I’ve spoken to who could use this to bolster their comfort and security levels – potentially unblocking their plans to integrate Azure and public cloud into their environments.

High Level Docker Overview – for IT Pros

A while back I shot a quick video with Madhan Arumugam and Ross Gardler talking about what this containerization technology is all about. Looking for information to understand the concepts and options yourself? Have a look at this high-level explanation / whiteboarding of container technologies and their advantages over individual processes and traditional IaaS virtual machines.

Madhan Arumugam, Principal Program Manager with the Azure compute team

Ross Gardler, Sr. Technical Evangelist with Microsoft OpenTech

Azure IaaS Week for IT Pros: Day 4 Speaker Lineup

A while back I talked about this event I’ve been wrangling speakers for, so I thought I’d put together a quick little post broken down by DAY with all the details about speakers and their sessions in one spot. I’ve been capturing these quick intros for each speaker with details about their session and also some fun tidbits about them outside of work. This is the ONE STOP SHOP for day FOUR – a comprehensive list of topics and speaker bios you can review before the big event on December 1st through the 4th at http://channel9.msdn.com.

There are four 1-hour sessions per day, each wrapped around a theme. Today’s theme is:

Day 4:  Optimize Windows Workload Architecture and Administration Capabilities Within Azure

Identify which of your applications and services are critical in your on-premises environment. Leverage your investment in Active Directory (and other directories) and set up synchronization with Azure Active Directory to simplify authentication in this cloud world. You will learn what capabilities each offering has, as well as how to configure them for single sign-on with more than 2,000 Software-as-a-Service offerings like Office 365, Salesforce, Dropbox and more.

As an IT Professional managing applications that are currently on premises, you understand what is involved in maintaining all the underpinnings of the IIS application. What if you could take those applications and move them to a Platform as a Service offering like Azure Websites, where you no longer need to worry about the underlying infrastructure hosting platform, allowing you to focus on what is important to you – your applications? The first step is understanding Azure Websites. Learn how to work with your development team to target a PaaS offering like Azure Websites in order to get ease of scale, load balancing, rapid deployment, staging sites and more.

Need a complementary database technology to go along with those PaaS solutions? Want to get some geo-diversified protection that isn’t going to break the bank? With the recent announcements around SQL Azure, more and more organizations are looking at how to leverage these capabilities.

Now that SharePoint is a supported workload on Azure IaaS, the next step is determining the design considerations and limitations that you need to be aware of before deploying your next SharePoint solution. This session has you covered. Learn from the team who has created the detailed spec on how to deploy SharePoint in Azure IaaS and also see how MSIT is doing it for our internal customers.

It is going to be an action packed day!

Don’t forget to register!!!!


Register through Microsoft Virtual Academy to receive reminder mails for this event and to obtain details for receiving a voucher for 50 percent off the exam price if taken by January 31st. Join the conversation on Twitter using #LevelUpAzure.

Azure IaaS Week for IT Pros: Day 3 Speaker Lineup

A while back I talked about this event I’ve been wrangling speakers for, so I thought I’d put together a quick little post broken down by DAY with all the details about speakers and their sessions in one spot. I’ve been capturing these quick intros for each speaker with details about their session and also some fun tidbits about them outside of work. This is the ONE STOP SHOP for day THREE – a comprehensive list of topics and speaker bios you can review before the big event on December 1st through the 4th at http://channel9.msdn.com.

There are four 1-hour sessions per day, each wrapped around a theme. Today’s theme is:

Day 3:  Embrace Open Source Technologies (Chef and Puppet Configurations, Containerization with Docker, and Linux) to Accelerate and Scale Solutions

Most people do not believe that Linux and OSS tools / application layers are first class citizens in our Azure services. We have come a long way in a short period of time to reverse that line of thought. Learn from experts in the Linux and OSS world how to design and deploy Linux and supporting technologies in your Azure environment. You can deploy machines from our Gallery or pull images from our VMDepot to get started quickly without having to re-invent the wheel.

If you are a SysAdmin who already supports solutions on the Linux platform at scale, you are probably familiar with Chef and Puppet. Did you know they are also available in Azure for both Linux and Windows Machines? You can take your existing instructions and configurations and configure agents on your IaaS instances to point to your servers for deployment. Learn how to implement these amazing technologies for your Linux and Windows Azure hosted solutions.

You might have heard about containerization technologies like Docker that allow you to achieve greater density and better performance than traditional full-fledged IaaS Virtual Machines. We recently announced support for Docker on Azure and all the goodness that it brings to your solutions. How should an IT Implementer / SysAdmin deploy and leverage this new technology? This session will tell you everything you need to know.

Attend this session to learn how to leverage the scale and infrastructure of Azure to deploy large sets of Linux VMs in the cloud. If you are running a large Linux server farm and are wondering what steps it takes to move it to Azure, this session will provide you with guidance and best practices. You will see how to script and simplify this effort as well as learn key considerations and principles. We will also walk through best practices for configuring and deploying the data tier of your workload.

It is going to be an action packed day!

Don’t forget to register!!!!


Register through Microsoft Virtual Academy to receive reminder mails for this event and to obtain details for receiving a voucher for 50 percent off the exam price if taken by January 31st. Join the conversation on Twitter using #LevelUpAzure.


Azure IaaS Week for IT Pros: Day 2 Speaker Lineup

A while back I talked about this event I’ve been wrangling speakers for, so I thought I’d put together a quick little post broken down by DAY with all the details about speakers and their sessions in one spot. I’ve been capturing these quick intros for each speaker with details about their session and also some fun tidbits about them outside of work. This is the ONE STOP SHOP for day TWO – a comprehensive list of topics and speaker bios you can review before the big event on December 1st through the 4th at http://channel9.msdn.com.

There are four 1-hour sessions per day, each wrapped around a theme. Today’s theme is:

Day 2:  Dive Deep into Networking, Storage and Disaster Recovery Scenarios


Yu-Shun Wang is a Program Manager on the Azure Networking team, working on hybrid connectivity and network virtual appliances in Azure. In his free time, he likes camping, skiing, and swimming.

Your infrastructure in Azure can be isolated and purpose-built for public consumption. How do you design your virtual networks and access controls to ensure everything is as desired? Now that you understand the fundamentals of virtual networks in Azure, let’s dive deeper into hybrid connectivity options using Site to Site VPNs, allowing your datacenter to extend to your Azure environment. Concerned about speed and latency for your applications? Don’t want your internal data being routed over the public internet? Explore what ExpressRoute can offer as the ultimate connectivity option.

Unlike other cloud providers, our story on storage is consistent across all of our Azure offerings. This means as new technologies are introduced to our storage platform, they become available to all of our services in a timely manner. Learn how to get the maximum IOps for your workloads and what new technologies are coming soon.

Disaster Recovery and off-site replicas of critical workloads used to be a very costly and complex thing to orchestrate. With System Center and Microsoft Azure, you can leverage Azure Site Recovery to coordinate a site failover from one datacenter to another. We recently announced that Microsoft datacenters can be a possible failover target, provided you are aware of the dependencies. Learn how to set up Azure Site Recovery to use your Azure environment as your replica destination and coordinate a failover before disaster strikes. If you’re not ready for a full solution, but looking to migrate specific machines from on-premises to Azure, we have you covered in this session as well. See how InMage can migrate your workloads from VMware, AWS and Hyper-V to your Azure IaaS environment.

When transitioning your skillset from on-premises to a more hybrid cloud environment, you also have to transition your management techniques from GUI and one-off solutions to a more reproducible, automated environment. Learn the ins and outs of Azure Automation services and the new Automation script gallery. Leverage existing scripts or publish your own for others to use in their environments. Looking to take this to the next level? Bring in some new Desired State Configuration skills to standardize, configure and maintain your deployments at scale in the cloud.

It is going to be an action packed day!

Don’t forget to register!!!!


Register through Microsoft Virtual Academy to receive reminder mails for this event and to obtain details for receiving a voucher for 50 percent off the exam price if taken by January 31st. Join the conversation on Twitter using #LevelUpAzure.


Azure IaaS Week for IT Pros: Day 1 Speaker Lineup

A while back I talked about this event I’ve been wrangling speakers for, so I thought I’d put together a quick little post broken down by DAY with all the details about speakers and their sessions in one spot. I’ve been capturing these quick intros for each speaker with details about their session and also some fun tidbits about them outside of work. This is the ONE STOP SHOP for day ONE – a comprehensive list of topics and speaker bios you can review before the big event on December 1st through the 4th at http://channel9.msdn.com.

There are four 1-hour sessions per day, each wrapped around a theme. Today’s theme is:

Day 1:  Establish the Foundation: Core IaaS Infrastructure Technical Fundamentals

Mark Russinovich will share his vision on what Azure is today and where Microsoft is taking it in the future. As one of the most trusted Technical Fellows at Microsoft and with his new role as CTO for Azure, Mark has a unique perspective of how Azure is architected and what individuals can do to get the most out of this platform.

Who better than Corey Sanders to take us through the nuts and bolts of what Azure IaaS is and how it works for IT professionals. Corey leads the team that designed and created Azure IaaS and is very passionate about all that it can do. He will move rapidly from the fundamentals to advanced configurations and how to use and interact with your environment.

AND

Drew picks up where Corey left off, going even deeper. He’ll focus on the specifics of how to get the most from your Windows workloads with Azure IaaS.

How do you design your IaaS implementations for optimal performance, maximum redundancy and geo-diversity? How do new machine series gain you better performance and leverage new hardware capabilities in our datacenters? Learn how to design your systems to minimize reboots and downtime due to user error or planned maintenance. This session has all the answers and more.
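As a taste of why the redundancy part of that session matters: under a simplified independent-failure model (real availability sets share fault and update domains, so this is only illustrative math, and the 99.9% single-VM figure is an assumed example, not an Azure SLA number), adding load-balanced copies improves availability like this:

```python
def combined_availability(single: float, instances: int) -> float:
    """Probability that at least one of `instances` independent copies is up."""
    return 1 - (1 - single) ** instances

# An assumed single VM at 99.9% uptime vs. two or three load-balanced copies:
for n in (1, 2, 3):
    print(f"{n} instance(s): {combined_availability(0.999, n):.6%}")
```

Two copies take the illustrative 99.9% to roughly 99.9999% – which is why scale-out across fault domains, not just a bigger single VM, is the usual answer to minimizing service interruptions.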

It is going to be an action packed day!

Don’t forget to register!!!!


Register through Microsoft Virtual Academy to receive reminder mails for this event and to obtain details for receiving a voucher for 50 percent off the exam price if taken by January 31st. Join the conversation on Twitter using #LevelUpAzure.

