Category Archives: Azure

First look at Azure Container Service

In Episode 115 of the Microsoft Cloud Show we interviewed Ross Gardler from Microsoft about their new Azure Container Service, which is currently in preview. I finally got some time a few weeks ago to play with ACS and thought I would share my first experiences here.  This is my 0-to-first-container experience.

Currently ACS allows you to provision two types of container service: a Mesos-based deployment or a Swarm-based one. I hadn’t played with Mesos much, so I opted to try that out.

Getting started

Note: I followed the getting started guide for deploying a new container service on the Azure website.

To get started it’s as simple as clicking the “Deploy to Azure” button on the pre-canned Azure template.  This will take you to your Azure management console where you can configure the various parameters for the template, as shown below.

2016-02-20_16-10-37

You need to name your cluster, pick the VM size for the nodes you want to run and set authentication details.  The toughest part of this for most people will be generating the SSH keys, as this is pretty foreign to many Windows folks.  But they provide a fairly simple walkthrough for you to create a key pair.
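If you have an OpenSSH client handy (Git Bash on Windows includes one), generating the key pair is a single command. A minimal sketch; the acs_rsa file name is just one I picked:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/acs_rsa

You then paste the contents of the ~/.ssh/acs_rsa.pub public key file into the SSH public key field of the template.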

When complete you hit OK and go get a coffee while your cluster is deployed 🙂  It can take a while as it spins up a few machines and configures everything.

deploy

Note: I got an error “The subscription is not registered to use namespace ‘Microsoft.Compute’.” during deployment the first time.  I was deploying into a new MSDN Azure subscription with free credit on it.  It turns out I needed to manually create a VM in this subscription first (any VM will do) before deployment of a template would work.  Once I had done this the template deployed fine.

I deployed a pretty simple cluster with 2 agent nodes and a Mesos master node.  In Azure you can see all the resources the template created in a new resource group, such as the VMs, networks and security groups etc…

2016-02-20_16-48-48

Now that I had a cluster up and running I could log into Mesos.  To find the URL, click “Succeeded” on the resource group’s deployment status and click “Microsoft.Template”.  You should see a couple of fully qualified domain names.

2016-03-14_11-31-39

To actually hit Mesos you need to create an SSH tunnel from your box into the cluster.  There is a decent write-up on how to do this here.

Once you have your SSH tunnel running you can hit the Mesos web interface on http://localhost/mesos/  (this is redirected over the SSH tunnel to your Mesos master running in Azure).
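For reference, the tunnel boils down to one SSH command. This sketch assumes the default azureuser admin account and the master SSH endpoint on port 2200, which is what my deployment used; substitute the MASTERFQDN value from your deployment details (binding local port 80 may require admin rights):

ssh -L 80:localhost:80 -f -N azureuser@<MASTERFQDN> -p 2200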

2016-02-21_10-22-35

Now you are ready to start running things!  Hit http://localhost/marathon/ to open the Marathon web UI which makes it pretty simple to run jobs on your cluster.

Click create and give it a name, 256 MB of memory and 1 instance.  Open the Docker container settings and specify “yeasy/simple-web” as the image name.  Then in the Optional Settings area set Port = 80.  This will map port 80 in the Docker container to port 80 on the host. Create the app and let it spin up.  You should see it in the UI similar to this:

2016-02-21_10-25-17
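If you prefer an API to the UI, Marathon also exposes a REST endpoint you can drive through the same SSH tunnel. A rough sketch of creating the equivalent app; the simple-web id is just a name I picked:

curl -X POST http://localhost/marathon/v2/apps -H "Content-Type: application/json" -d '{
  "id": "simple-web",
  "cpus": 1,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "yeasy/simple-web",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 80, "protocol": "tcp" } ]
    }
  }
}'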

Grab your load balancer’s fully qualified domain name from the Azure portal.  It’s the AGENTFQDN URL in the deployment details you found earlier.

You should be able to hit that URL and see your simple website running!

Summary

This is obviously only the most basic thing you can do with a Mesos-based cluster running in Azure, but it was my attempt at seeing how Azure is approaching the setup.  All in all it was surprisingly painless.

The goal of ACS right now is to make it simple to run a Docker cluster in Azure using either Mesos or Swarm.  It doesn’t take away the need to manage that cluster once it’s deployed, so you will need people who know how to run a Mesos cluster and feed and water it appropriately.  Deployment is step one, but running it is a different beast altogether from what I understand.  I am no expert in this area, so you will want to tread carefully and make sure you have the appropriate skills on staff to do this.

I for one would LOVE to see Azure also add a Container as a Service (CaaS) offering where you just specify how much compute you want, how much memory etc… and then have Azure spin up and manage a Docker cluster for you, with the infrastructure being invisible.  That way you don’t need to be a Mesos master and you can let the pros run it for you.

I think CaaS is the final destination for Docker … just prior to everyone starting to espouse the virtues of true Platform as a Service (PaaS) and ditching this whole concept of apps running in containers and being aware of the OS at all.

When true CaaS comes to fruition, as I think it will in time, maybe Ray Ozzie (inventor of Azure, codename Red Dog) can say “told ya so” about his vision of Platform as a Service being the ultimate destination for cloud computing (just about 10 years too early).

Running apps using Docker Cloud (aka Tutum)

Anyone who has listened to me rant on about how interesting Docker is on the Microsoft Cloud Show may have caught me talking about Tutum.

image

The short story on Tutum is that it provides an easy to use management application over the Virtual Machines you want to run your apps on with Docker.  It is (sort of) cloud provider agnostic in that it supports Amazon Web Services, Microsoft Azure and Digital Ocean, among others.

It was bought by Docker late last year and was recently re-released as Docker Cloud.

What does it provide?

At a high level you still pay for your VMs wherever you host them, but Docker Cloud provides management of them for 2c an hour (after your first free node), no matter how big or small they are.   You write your code, package it in a Docker image as per usual, and then use Docker Cloud to deploy containers based on those images to your Docker nodes. You can do this manually, or have it triggered when you push your image to somewhere like Docker Hub as part of a continuous integration setup.

Once you have deployed your apps (“Services” in Docker Cloud terminology) you can use it to monitor them, scale them, check logs, redeploy a newer version or turn them off etc…  They provide an easy to use web app, REST APIs and a Command Line Interface (CLI).
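As a taste of the CLI side: assuming you have installed it (pip install docker-cloud) and logged in, deploying and inspecting a service looks roughly like this. The exact flags here are from my memory of the Tutum-era CLI, so double check them against docker-cloud service run --help:

docker-cloud service run -p 80:80 --target-num-containers 2 dockercloud/hello-world
docker-cloud service ps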

So how easy is it really?

Getting going …

The first thing you have to do is connect to your cloud provider, like Azure.  For Azure this means downloading a certificate from Docker Cloud and uploading it into your Azure subscription.  This lets Docker Cloud use the Azure APIs to manage things in your subscription for you (details here).

Once you have done that you can start deploying Virtual Machines, “Nodes” in Docker Cloud terminology.  Below I’m creating a 2 node cluster of A2 size in the West US region of Azure. 

image

That’s it.  Click “Launch node cluster”, wait a few minutes (OK, quite a few), and you have a functional Docker cluster up and running in Azure.

image

In Azure you can take a look at what Docker Cloud has created for you.  Note that as of the time of writing Docker Cloud provisions “Classic” style VMs in Azure rather than using the newer ARM model.  It also deploys each VM into its own cloud service and resource group, which isn’t good for production.  That said, Docker Cloud lets you Bring Your Own Node (BYON), which lets you provision the VMs however you like, install the Docker Cloud agent on them and then register them in Docker Cloud.  Using this you can deploy your VMs using ARM in Azure and configure them however you like.

image

Deploy stuff …

Now that you have a node or two ready, you can start deploying your apps to them!  Before you do this you obviously need to write your app … or use something simple like a pre-canned demo Docker image to test things out.

Docker Cloud makes this really simple through “Services”.  You create a new service, tell it where it should pull the Docker image from, and set a few other configuration options like ports to map etc… Then hit Create and your containers will be deployed to your nodes.

Try this once you have your nodes up and running.  Click Services in the top nav, then Create Service. Under the Jumpstarts & Miscellaneous category you should see the “dockercloud/hello-world” image. Select it and then set it up like this:

image

There are only three things I changed from the default setup:

  1. Moved the slider to 2 in order to deploy 2 containers.
  2. Mapped port 80 of the container to port 80 of the node and ticked Published.  This maps port 80 of the VM to port 80 of the container running on it, so that we can hit it with a web browser.
  3. Selected High Availability as the deployment strategy.  This ensures the containers are spread across the available nodes vs. both landing on one.

Click “Create and deploy” and you should see your containers starting up.   Pretty simple huh!

image

Note: There is obviously a lot more configuration available, for things like environment variables and volume management, that you will eventually need to learn about as you develop and deploy apps using Docker.

Once your containers are deployed you will see them move to the running state:

image

Now you have two hello world containers running on your nodes.  If you go back to your list of Nodes you should see 1 container running on each:

image

I want to see the goods, man!

You can test your hello-world app out by hitting its endpoint.  You can find out what that is under the Service you created, in the Endpoints information.

image

  1. This is the service endpoint.  It will use DNS round robin to direct requests between your two running containers.
  2. These are the individual endpoints for each container.  You can hit each one independently.

Try it out!  Open the URL provided in a browser and you should see something like this:

image

Note that #1 will indicate what container you are hitting.

Want more containers?  Go into your Service and move the slider and hit apply.

image

You will get an error like this:

image

This is because we mapped port 80 of the node to the container, and you can only do that mapping once per node/VM; i.e. two containers can’t both be listening on port 80 of the same host.  So unless you use HAProxy or similar to load balance across your containers, you will be limited to one container per node mapped to port 80.  I might write up another post about how to do this better using HAProxy.
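You can reproduce the limitation with plain Docker on any single host; the second run fails because host port 80 is already taken:

docker run -d -p 80:80 dockercloud/hello-world
docker run -d -p 80:80 dockercloud/hello-world   # fails: port is already allocated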

Automate all the things …

We are a small company and we want to automate things as much as possible to reduce the manual effort required for mundane tasks.  We have opted to use Docker Cloud to help us deploy containers to Azure as part of our continuous integration and continuous deployment pipeline.

In a nutshell, when a developer commits code it goes through the following pipeline and is automatically deployed to our staging environment (the build step is sketched after the list):

  1. Code is committed to GitHub
  2. Travis-CI.com is notified; it pulls the code and builds it.  Once built, it creates a Docker image and pushes it to Docker Hub.
  3. Docker Cloud is triggered by a new image.  It picks it up and redeploys that Service using the new version of the image.
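The build-and-push step in Travis boils down to a couple of Docker commands. A minimal sketch, with a hypothetical myorg/myapp image name and registry credentials assumed to be in environment variables:

docker build -t myorg/myapp:$TRAVIS_COMMIT .
docker tag myorg/myapp:$TRAVIS_COMMIT myorg/myapp:latest
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker push myorg/myapp:latest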

This way, a few minutes after a developer commits code, the app has been built and deployed seamlessly into Azure for us.  We have a Big Dev Ops Flashing Thing hanging on the wall telling us how the build is going.

Cool … what else …

At Hyperfish we have been using Tutum for a while during its preview period with what I think is great success.  Sure, there have been issues during the preview, but on the whole I think it has saved us a TON of time and effort setting up and configuring Docker environments.  Hell, I am a developer kind of guy, not much of an infrastructure one, and I managed to get it working easily, which I think is really saying something 🙂

Is this how you will run production?

Not 100% sure, to be honest.  It is certainly a fantastic tool that helps you run your apps easily and quickly.  But there is a nagging sensation in the back of my head that it is yet another service dependency that will have its share of downtime and issues, and that might complicate things.  Then again, you could say that about any additional bit of technology you introduce and take a dependency on. That said, traffic to and from your apps does not go through Docker Cloud; it goes direct to your nodes in Azure, so if Docker Cloud has brief downtime your app should continue to run just fine.

I have said that, for the size we currently are and with the team focused on building product, we would only consider something else once we can do a better job than Docker Cloud does for us.

All in all I think Docker Cloud has a lot of great things to offer.   It will be interesting to compare and contrast it with the likes of Azure’s new offering, Azure Container Service (ACS), as it matures and approaches General Availability.  It’s definitely something we will look at too.

-CJ

Sandbox code is “deprecated”, long live the Sandbox

“we have deprecated the use of custom managed code within the sandboxed solution” – Brian Jones, Principal Program Manager, Apps for SharePoint

It’s been a long time coming and widely anticipated that this announcement would come at some point.  It’s great to see the announcement and the clarity many have been asking for.

I feel like I’m in a good position to comment on this and give some background about why I think deprecating code-based sandbox solutions is a good idea. I was on the SharePoint engineering team when the sandbox was being built, and for a period of time I was the Program Manager for the feature.

Background:  Sandbox solutions were introduced in SharePoint 2010.  They allowed a packaged set of assets and code to be uploaded to a SharePoint site.  That could consist of declarative components, like XML for adding things such as List Templates, as well as compiled code for things like Web Parts or Event Receivers.

When the Sandbox was being designed and built, it was about 2 to 3 years prior to SharePoint 2010 being released. Azure, and cloud computing in general, either didn’t exist or was in its infancy. SharePoint needed a way to upload customizations/components to SharePoint sites whose administrators were not comfortable with installing Farm Solutions (aka full trust solutions). Microsoft itself was a perfect example of this.  If I built a web part, I, as a Microsoft employee at the time, couldn’t load it onto the SharePoint sites that MS IT ran.  No 3rd party web parts or products. This was a common problem in many large organizations, and we heard about it time and time again from customers.

Sandbox Solutions were the answer. They allowed users to upload a solution and have SharePoint run it while it was controlled, secured and run in a sandboxed process. The main thing this gave SharePoint was the ability to isolate the code in that solution, ensuring that if it crashed or was badly behaved it didn’t break the rest of the SharePoint environment.

The problem was that Sandbox Solutions was a feature added in Windows SharePoint Services (WSS), which was a different product from SharePoint Portal Server (SPS), the product built on top of WSS.  The API set available in the Sandbox was limited to WSS APIs, and even then only a subset of them. There were good reasons for this, but at the end of the day it was very limiting for people.  Ideally it would have been great to have lots of SPS APIs available too, but that didn’t happen (different story).

So that is the background about how/why Sandbox came about.

… now fast forward to today …

In a nutshell the Sandbox was a good solution to the problem faced when it was designed. However, it’s not a great solution to the problem given the technology we have today.

Why is it no good today you ask?

A lot happened after the Sandbox was designed and built.  Cloud computing took off, and running isolated code got easier in all sorts of places, e.g. isolated apps on a phone.  A lot was learnt.

In short, it’s my belief that SharePoint shouldn’t have been trying to replicate an isolated code hosting environment.  That is reinventing the wheel, and there are other teams at MS who already build products that do this extremely well, namely IIS and Azure.

Think about that for a second.  Imagine being given some arbitrary code and told to run it, but in a way that is safe, secure, manageable and fault tolerant.  It’s actually quite a tough challenge. If you say it’s easy then you should try doing it instead of talking about it 🙂 (a tip with wider life applicability too)

So today MS clarified that SharePoint is getting out of the code hosting game.  Why?  Because it was limited and there are better solutions to this problem today.

The new SharePoint app model is designed to solve this by moving “sandbox” code to an alternative host e.g. IIS or Azure or <insert thing that runs code here>.

Sandbox code might be “dead”, but the new app model IS the new Sandbox!

I see the reasons the sandbox came about as the same ones behind the new app model.  They are solving the same problem: how do you allow someone to customize, extend and build new things on SharePoint without compromising the integrity of SharePoint itself?  That is the goal.

Today in SharePoint 2013 and Office 365 we have the ability to build solutions that use this new app model.  Sure, it’s not perfect in the APIs it provides and there is plenty of scope for adding things.  I am certain it will evolve to cater for more of these over time.  That said, it is MUCH better suited for the long term.  I for one am loving the ability to use all the latest dev tools and technologies in Azure that were not possible in SharePoint previously.

The app model may not be applicable or possible for everyone today, and that is fine. My bet is that it will develop and that this will change over time.  But it is the right path moving forward.  This might cause some pain for people in the short term, and I understand the frustration people have with changes like this. But I would MUCH rather deal with this change than be limited to an inferior set of capabilities in the longer term.  This is the right move long term (in my humble opinion).  Short term pain, long term gain.

Getting SharePoint out of the code hosting game was the right thing to do, and I applaud the team for clarifying their position on this.

I look forward to the SharePoint Conference in March where hopefully (fingers crossed) we will hear more about the future of the new app model and how it will address its shortfalls today.

-CJ

Ep 10 of the Microsoft Cloud Show – News on Azure, Google Compute Engine, Amazon and more

I can’t believe we just hit episode 10 of the Microsoft Cloud Show!  It feels like a mini milestone.

Episode 10 is jam packed with news and updates from the Amazon re:Invent conference, as well as news on the newly released Google Compute Engine and, of course, lots of Azure goodies too.

Go get it and let AC and me assault your ear buds.

Episode 010 – Latest news in the cloud from Microsoft, Amazon and Google

Thanks for the amazing support so far with the podcast.  We had > 7000 downloads over the past month or so which is astounding!

-CJ

Managing your Azure cloud costs with Kerrb

One of the big problems developers and organizations have using cloud services like Azure is the potential for costs to go crazy if you don’t shut your dev, test or temporary Virtual Machines off. Some time back Andrew Connell and I got talking and had an idea for an online service that would help you manage those costs.  We talked with some people and found loads of them were wary of using Azure and Amazon Web Services because of these cost overrun type of issues.

image

So we decided to fix it …  Introducing Kerrb.

Kerrb is a SaaS product designed to save you money by automatically turning off Azure VMs that you forget about.  If you forget to turn off a virtual machine Kerrb will make sure it’s turned off on a schedule that you decide on.

Kerrb is still being built, but you can sign up for the launch list and be one of the first to get access when it is ready.  We will send you updates on how development is progressing, and when it’s ready those on the launch list will get the opportunity to sign up and test out the system. As an added bonus, if you are on the launch list we will honor the pricing we have up on the site, even if we decide to tweak it prior to launch.

Kerrb will start small and evolve quickly as demand and feedback drive the product development. The highest priority “Pri 0” [1] feature is turning off Virtual Machines in Azure when you forget, but we have a lot of other great features on the roadmap, including support for Amazon Web Services and other leading cloud providers.

Keep up to date with developments and help us get the word out by:

  1. Signing up for the launch list
  2. Liking Kerrb on Facebook
  3. Keeping an eye on the blog for updates and news
  4. Following @KerrbApp on Twitter

Have a read of a blog post Andrew wrote on the Kerrb blog here: Using VMs for Dev, Test & Show – Perspectives from an Indie Consultant, Trainer and Presenter

And something I wrote about Managing cloud spend in a development organization

We look forward to hearing your comments and feedback!

-CJ

[1] Pri 0 – Microsoft speak for the highest priority features in product development. You have to have all the Pri 0’s.

Introducing the Microsoft Cloud Show podcast

image

The only place to stay up to date on everything going on in the Microsoft cloud world, including Azure and Office 365.

Quite some time ago I bugged, pestered, asked my good friend Andrew Connell (AC) if he would be interested in starting a SharePoint podcast.  Given he is a busy guy with a lot on his plate, he wasn’t so sure it was a good idea to begin with.  After all, we both said that if we were going to embark on something like this, we wanted to do it right.

We ultimately decided to broaden the show’s scope to include not only SharePoint … but rather take on talking about the whole Microsoft cloud story.  After all, Microsoft is one of the largest Enterprise players in the market and there is A LOT going on in their cloud offerings.

The Microsoft Cloud Show was born.

Our aim with the show is to bring you news, information and commentary about all things going on in the Microsoft cloud world. We want to invite listeners into the show by way of audio and email questions sent in. We want to keep a consistent delivery of shows that you can count on.  Most of all we want it to be easily digestible, so we will try to stick to 30 minutes per show.

We are not pretending to be professional podcasters here and we will likely learn a lot along the way.  But we hope you will join us for the journey.

We are launching the podcast in iTunes (MS marketplace coming soon) with 3 episodes. These are just introductory shows covering our motivations for the show, plus a background show on each host. We will be getting into the meat and potatoes in Episode 4, which is coming soon.  Ideally we would love to settle into a fortnightly show.

We would love to hear your feedback, and about the topics that interest you and that you would like to hear us address on the show.

In the meantime … please enjoy this screenshot from us recording our first show.

image

-CJ

Extending your Azure AD tenant to include Office 365 services

I learnt something today that I thought would be interesting to share, in the hope it saves someone else the research.

Say you already have Windows Intune or Azure AD up and running, and now you are ready to give Office 365 a go.

You have a couple of choices:

  1. Create a new Office 365 tenant
  2. Extend your existing Azure AD tenant and add Office 365 services.

The correct way to do things is to extend your existing tenant and add Office 365 services.  If you have Azure AD already, you are likely using DirSync to push all your user accounts from your on-prem AD to Azure AD, and no doubt those are the same users you want accessing Office 365.

If you try to create a new tenant and then DirSync to that tenant, you will most likely hit issues trying to push the same users to two different Azure AD tenants.

Extending is the way to go.

If you sign into the Office 365 management portal using the credentials you currently use for Azure AD/Intune, you will see a page like this:

image

You will notice it says that you are not currently subscribed to any Office 365 services.

So how do you go about adding those?

Jump over to the “purchase services” tab in the left navigation and you will get a selection of the various plans (aka SKUs) available.  In my case I picked the E3 Trial.

image

This will then add the included services to your tenant. Once provisioning is complete you can carry on with the other tasks you might like to do, like setting up Identity Federation (ADFS) etc…

It seems blatantly obvious now that I have tried it, and it is possibly hardly worth a blog post, but until now I had always started from the Office 365 side of things and had never looked at starting with Azure AD and adding Office 365.

Turns out to be dead simple 🙂

-Chris.

SharePoint Provider Hosted Apps in Azure monitoring tip

One of the tips I gave during my session at TechEd North America this year was about using SignalR in your SharePoint provider hosted applications in Azure.

One of the pain points for developers and people creating provider hosted apps is monitoring them when they are running in the cloud. This might be just to see what is happening in them, or it might be to assist with debugging an issue or bug.

SignalR has helped me A LOT with this.  It’s a super simple to use, real-time messaging framework. In a nutshell, it’s a set of libraries that let you send and receive messages in code, be that in JavaScript or .NET code.

So how do I use it in SharePoint provider hosted apps in Azure to help me monitor and debug?

A SharePoint Provider Hosted App is essentially a web site that provides the parts of your app that surface in SharePoint through App Parts or App Pages etc… It’s a set of pages that can have code behind them, as any regular site does.  It’s THAT code that I typically want to monitor while it’s running in Azure (or anywhere for that matter).

So how does this work with SignalR?

SignalR has the concept of Hubs that clients “subscribe” to and that producers of messages “publish” to.  In the diagram below, code behind App Pages publishes messages (such as “there was a problem doing XYZ”), and consumers listening to the Hub receive those messages when they are published.

image

In the example I gave at TechEd I showed a SharePoint Provider Hosted App deployed in Azure that published a message whenever anyone hit a page in my app.  I also created a “Monitor.aspx” page that listened to the Hub for those messages from JavaScript and simply wrote them to the page in real time.

How do you get this working? It’s pretty easy.

Part 1: Setting up a Hub and publishing messages

First, add SignalR to your SharePoint Provider Hosted app project from NuGet. Click the image below for a bigger version showing the libraries to add.

image

Then in your Global.asax.cs you need to add an Application_Start handler like this.  It registers SignalR and maps the hub URLs correctly.

protected void Application_Start(object sender, EventArgs e)
{
    // Register the default hubs route: ~/signalr
    RouteTable.Routes.MapHubs();
}

Note: You might not have a Global.asax file in which case you will need to add one to your project.

Then you need to create a Hub to publish messages to and receive them from.  You do this with a new class that inherits from Hub, like this:

public class DebugMonitor : Hub
{
    public void Send(string message)
    {
        Clients.All.addMessage(message);
    }
}

This provides a single method called Send that any code in your SharePoint Provider Hosted app can call when it wants to send a message. I wrapped this code up in a short helper class called TraceCaster like this:

public class TraceCaster
{
    private static IHubContext context = GlobalHost.ConnectionManager.GetHubContext<DebugMonitor>();

    public static void Cast(string message)
    {
        context.Clients.All.addMessage(message);
    }
}

This gets a reference to the Hub (called “context”) and then uses it in the Cast method to publish the message.  In code I can then send a message by calling:

TraceCaster.Cast("Hello World!");

That is all there is to publishing/sending a simple message to your Hub. 

Now the fun part … receiving them 🙂

Part 2: Listening for messages

In my app I created a new page called Monitor.aspx. It has no code behind, just client-side JavaScript. That code first references some JS script files: jQuery, SignalR and then the generated hubs proxy endpoint that SignalR exposes.

<script src="/Scripts/jquery-1.7.1.min.js" type="text/javascript"></script>
<script src="/Scripts/jquery.signalR-1.1.1.min.js" type="text/javascript"></script>
<script src="/signalr/hubs" type="text/javascript"></script>

When the page loads, you want some JavaScript that starts listening to the Hub and registers an “addMessage” function that is called whenever a message is sent from the server.

$(function () {
    // Proxy created on the fly
    var chat = $.connection.debugMonitor;

    // Declare a function on the chat hub so the server can invoke it
    chat.client.addMessage = function (message) {

        var now = new Date();
        var dtstr = now.format("isoDateTime"); // format() comes from a date formatting helper library

        $('#messages').append('[' + dtstr + '] - ' + message + '<br/>');
    };

    // Start the connection
    $.connection.hub.start().done(function () {
        $("#broadcast").click(function () {
            // Call the send method on the server
            chat.server.send($('#msg').val());
        });
    });
});

This code uses the connection.hub.start() function to start listening for messages from the Hub.  When a message is sent, the addMessage function fires and we can do whatever we like with it; in this case it simply appends the message to an element on the page.

All going well when you are running your app you will be able to open up Monitor.aspx and see messages like this flowing:

image

If you don’t see messages flowing, you probably have a SignalR setup problem.  The most common issue I found when setting this up was the client not being able to correctly reference the SignalR JS or the hubs endpoint on the server.  Use the developer tools in IE or Chrome (or Fiddler) to check that the calls being made to the server are working correctly (see below for what working should look like):

image

If you are sitting there thinking “What if I am not listening for messages? What happens to them?”, I hear you!  Well, unless someone is listening, the messages just go away. They are not stored. This is a real-time monitoring solution; think of it as a window for listening in on what’s going on in your SharePoint Provider Hosted app.

There are client libraries for .NET, JS, iOS and Android too, so you can publish and listen for messages on all sorts of platforms.  Another thing I have used this for is simple real-time communication between Web Roles and Web Sites in Azure.  SignalR can use the Azure Service Bus to assist with this, and it’s pretty simple to set up.

Summary

I’m a developer from way back, when debugging meant printf.  Call me ancient, but I like being able to see what is going on in my code in real time. It just gives me a level of confidence that things are working the way they should.

SignalR coupled with SharePoint Provider Hosted Apps in Azure is a great combination.  It doesn’t provide a solution for long-term application logging, but it does provide a great little real-time window into your app that I personally love.

If you want to learn more about SignalR then I suggest you take a look at http://www.asp.net/signalr where you will find documentation and videos on other uses for SignalR.  It’s very cool.

Do I use it in production? You bet!  I use it in the backend of my Windows Phone and Windows 8 application called My Trips, as well as in SharePoint Provider Hosted Apps in Azure. Here is a screenshot from the My Trips monitoring page; I can watch activity as various devices register with my service etc…

image

Happy Apping…

-CJ

Using SharePoint VMs in Windows Azure with VPN access

At TechEd North America 2013 my good friend Paul Stubbs did a session with Michael Washam and Corey Sanders on:  IaaS: Hosting a Microsoft SharePoint 2013 Farm on Windows Azure

This was a really cool session that focused on building out SharePoint farms in Azure.  One of the things they talked about was a set of PowerShell scripts they have built to fully automate this process.

The “SharePoint 2013 Automated Deployment Master Scripts” are a set of scripts you can grab from GitHub that automate, end to end, the process of creating, configuring and setting up SharePoint on Azure IaaS VMs.  They let you build out two farm types right now:

  • Single VMs – 3 VMs (DC, SQL and SharePoint)
  • Highly Available – 9 VMs (DC x2, SQL x2, 1 quorum VM, SP app servers x2, SP WFEs x2)

I watched the session in the online recordings after TechEd, and it interested me enough to try it out.

So I did! And I have to say it totally rocks!

The PowerShell all runs on your client machine and uses the Azure PowerShell cmdlets to remotely set up Azure etc… Getting your machine ready before you run the scripts is a little tricky, as it uses CredSSP for delegation, which requires some manual setup; but if you follow the wiki word for word you should be fine (here).  My CredSSP setup failed because the Windows Remote Management service wasn’t running.

image

I won’t get into all the steps for how to set up and run the scripts, as that is well documented:

https://github.com/WindowsAzure/azure-sdk-tools-samples/wiki/Automated-Deployment-of-SharePoint-2013-with-Windows-Azure-PowerShell

After you run the scripts you end up with a fully configured SP farm running in Azure.  Nice!  I opted for the Single VMs option … so 3 VMs in total.

image

All those VMs sit on a Virtual Network that is also configured for you:

image

Once the whole script has run (warning: it can take a couple of hours) you can RDP into the VMs … or just navigate to the default SharePoint site that is configured. The script also outputs the admin credentials and the sites that were created:

Credentials: corp\spadmin Password: *********
Created Farm on http://sp-foo.cloudapp.net
Created Admin Site on http://sp-foo.cloudapp.net:20000

If you are looking for a quick and easy way to get started with building a SharePoint environment out on Azure then this is a great way to get started.

The next thing I wanted to do was connect my personal machine to the same virtual network the VMs run on.  Azure provides the ability to set up a point-to-site VPN that lets you do this.  Once you have configured it, you end up with a VPN connection from your machine into the network with your VMs in Azure.  This makes working with the whole setup a bunch easier: you can use your local machine for development and connect to the farm seamlessly.  You could even join your machine to the AD domain that was automatically created for you if you wanted.

I did this, and although it’s pretty complex it was pretty neat to finally get it up and running.

A good starting point is this guide: Configure a Point-to-Site VPN in the Management Portal

HOWEVER!  One of the steps in that guide is creating a Gateway.  This is the VPN endpoint that your client PC connects to.  The steps in that guide assume you are creating a new virtual network, whereas in my case the virtual network was already created.  This means the settings referred to in the guide are not available!  In particular, I couldn’t modify the “Configure point-to-site connectivity” checkbox, as it was disabled.  This took me quite some time to figure out.

First you need to Export your configuration using the Export button on the Virtual Network:

image

Then you need to modify the exported XML file and add the GatewaySubnet subnet and the Gateway element shown in the configuration below:

  <VirtualNetworkConfiguration>
    <Dns>
      <DnsServers>
        <DnsServer name="DC1" IPAddress="10.20.2.4" />
      </DnsServers>
    </Dns>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="SPAutoVNet" AffinityGroup="SPAutoVNet-AG">
        <AddressSpace>
          <AddressPrefix>10.20.0.0/16</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="AppSubnet">
            <AddressPrefix>10.20.1.0/24</AddressPrefix>
          </Subnet>
          <Subnet name="DCSubnet">
            <AddressPrefix>10.20.2.0/24</AddressPrefix>
          </Subnet>
          <Subnet name="GatewaySubnet">
            <AddressPrefix>10.20.3.0/24</AddressPrefix>
          </Subnet>
        </Subnets>
        <DnsServersRef>
          <DnsServerRef name="DC1" />
        </DnsServersRef>
        <Gateway>
          <VPNClientAddressPool>
            <AddressPrefix>10.0.0.0/24</AddressPrefix>
          </VPNClientAddressPool>
        </Gateway>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>

This adds a gateway subnet and an address pool for client PCs that connect via VPN.

Then you can reimport that configuration and your setup will be updated to include those settings.

image

Then you should see the point-to-site settings correctly setup like this:

image

You can then move on to creating the Gateway you need on that virtual network:

When you do that it can take a while to create the gateway … so be patient 🙂 It took about 10 minutes for me.

If that works correctly you will see your gateway setup properly along with its external IP etc…

image

Now comes the fun part … CERTIFICATES!  The VPN uses client certificates to authenticate, so you need to create and upload a root cert to Azure as part of this. These steps are detailed in the MSDN guide here: http://msdn.microsoft.com/en-us/library/windowsazure/dn133792.aspx#bkmk_VPNCertificates

It’s a bit fiddly, but in a nutshell you create a root certificate, upload it to Azure, create a client certificate off that root certificate and then load that on your client PC.  Below is my root cert uploaded to Azure:

image
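For reference, the guide does this with the makecert tool from the Windows SDK. Roughly, and with certificate names that are just placeholders I picked:

makecert -sky exchange -r -n "CN=AzureP2SRoot" -pe -a sha1 -len 2048 -ss My "AzureP2SRoot.cer"
makecert -n "CN=AzureP2SClient" -pe -sky exchange -m 96 -ss My -in "AzureP2SRoot" -is My -a sha1

The first command creates the self-signed root certificate (and the .cer file you upload to Azure); the second creates a client certificate signed by that root in your personal certificate store.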

Finally you get to the results of all your hard work.  You can download the VPN package which contains the configuration etc… from the Azure portal via a handy link:

image

Install it on your client PC, and once complete you should see a VPN connection available in the network area in Windows:

image

image

image

All going well you will be VPN’d into your Azure network and you should be able to ping the VMs! e.g. 10.20.1.5 is the main SP VM.

I went and created a new Web App in SharePoint on http://intranet, and the only other thing I did was add “intranet” to the hosts file on my client PC so it knew to hit 10.20.1.5 (the SP machine).
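On Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts, and the entry is just one line:

10.20.1.5    intranet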

After that … boomtown! … my newly minted SharePoint site is available off my client PC via the VPN to Azure.

image

This is going to be very handy for playing around with various SharePoint farm setups in Azure, with the flexibility of having them run in the cloud.  I don’t have a 32GB RAM laptop to do this “on prem”, unfortunately. Azure VMs have only recently become affordable for me, with the announcement that you don’t pay for them when they are switched off.

Enjoy and thanks for reading.

Chris Johnson