Author Archives: Chris Johnson

About Chris Johnson

Chris is an avid developer, speaker, and the General Manager of Provoke Solutions Inc., a Microsoft Gold Partner in Seattle, WA, and one of the world's most renowned and sought-after online experience consultancies. Provoke Solutions specializes in software solutions for SharePoint and the Microsoft technology stack (http://www.provokesolutions.com). In November 2011 Chris left Microsoft Corporation after nine and a half years, most recently as a Senior Technical Product Manager for the SharePoint product group in Redmond, Washington, managing SharePoint’s professional developer audience technical marketing programs. Chris moved to Redmond in 2007 to work in the software engineering team on the SharePoint 2010 release after working for Microsoft New Zealand, where he consulted to customers across the Asia Pacific region on designing and implementing Content Management Server and SharePoint deployments. Chris’ background is in Microsoft software development, and he enjoys all things technical. He speaks at numerous conferences around the world, such as Tech.Ed, the SharePoint Best Practices Conference, SharePoint Connections and the worldwide SharePoint Conference. Chris holds a Bachelor of Computer Science and enjoys throwing himself out of perfectly good airplanes from time to time. Chris blogs and can be contacted via www.looselytyped.net

Publisher verification of Azure AD apps

At the Microsoft Build conference this year the Azure AD team announced an interesting new capability called Publisher verification. The goal is to let you know, as someone about to give an application permissions to some or all of your data, that the application really comes from the publisher it claims to.

Consenting to give an application permissions to your data is an important acknowledgement that you trust that application with your data. You want to be really sure you are saying “yes!” to the right company.

So … What does this look like?

Publisher verification means that when you are consenting to an application you will see a blue check/tick stamp next to the publisher name like this:

[screenshots: the consent prompt showing a blue “verified” badge next to the publisher name]

This means that when consenting to an application you can be sure the app is from the publisher you expect. It’s an extra level of confidence that you didn’t get phished into consenting to a rogue app that is about to steal all your data and do nefarious things with it … well … you trust that the publisher won’t, since you trust them, right? 🙂  More on this in a moment.

How does this work?

Microsoft is attempting to help customers consent to app publishers they trust by making the app publishers jump through a few hoops to prove they are who they say they are. Most app stores, Apple’s for example, do something similar by running a business verification check.

App developers must associate their application registration to their Microsoft Partner Network (MPN) account. The app developer pops their MPN id into the app registration and MS then verifies the following:

  1. The verified publisher domain of the app registration in Azure AD matches a verified domain in your MPN account.
  2. The account you are logged in as is an authorized user in the MPN account.

The publisher domain requirement, #1 above, could be a little tricky for some developers to meet. You will likely need to find whoever manages your MPN account and work with them to ensure you have the same DNS-verified domain in both MPN and the Azure AD where you register the app.  A lot of developers register apps in a secondary Azure AD tenant, away from their company’s primary tenant, so it could take a bit of planning to connect up a verified domain in AAD and MPN to make it work.
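For reference, the Azure AD side of the publisher domain verification is done by hosting a small JSON file on the domain itself. The shape below follows the Azure AD publisher domain docs; the domain and app ID are placeholders:

https://contoso.com/.well-known/microsoft-identity-association.json

{
  "associatedApplications": [
    { "applicationId": "<your application (client) ID>" }
  ]
}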

How do you set it up?

You set this up in Azure AD on the app registration blade for your application.  Go into the Branding tab for your app and you should see the Publisher verification section towards the bottom. Simply drop in your MPN ID.  If all the checks pan out (see above) then your application will be marked as verified like so:

[screenshot: the app registration’s Branding blade showing the publisher as verified]
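If you’d rather script it, Microsoft Graph exposes a setVerifiedPublisher action on applications that does the same thing. A minimal sketch (check the current Graph docs for the exact shape):

POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/setVerifiedPublisher
Content-Type: application/json

{ "verifiedPublisherId": "<your MPN ID>" }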

Where is all this going?

As Microsoft holds more and more of organizations’ data it’s incredibly important that they help customers protect it. Ensuring customers are aware of who they are giving access to their data is an important first step in that. That said, it’s certainly not going to stop an app developer from taking all your data if they break that trust with you, but it can help with ensuring you are saying yes to the right app to start with.

In addition to the announcement about publisher verification, Microsoft said they will allow customers to enforce policies that ensure users can only consent to apps from verified publishers.  This would lower the likelihood of a user being tricked into consenting to an app that a malicious actor set up to siphon data; at the very least they would need to have jumped through more hoops to verify their app first.

I believe we will see this taken further in the future in a few ways:

  1. Consent being restricted to publisher-verified apps by default. (see update note below)
  2. The ability for admins to set policy that allows consent only to a set list of verified publishers.

Microsoft holds the keys to many organizations’ valuable data.  They really want to avoid a Cambridge Analytica situation and are taking the first steps necessary to build more trust into their APIs and the apps that connect to your data.

I think this is a great move by Microsoft.  It might seem like a small one currently, but hopefully we will see more advances in the future that will help us all protect our data.

You can read more about how to set this up with your applications here: Publisher verification (preview)

UPDATE: Publisher verification is now GA.  Also, new multi-tenant apps registered after Nov 8th 2020 must be publisher verified or users will not be able to consent to them.

-CJ

Measuring cloud market share. A pet project.

For a while I have had a desire to measure who is winning market share in the cloud. Microsoft vs. Google vs. Amazon vs. Others. I finally got some time over a few weekends to build an app to help me do that.

Here it is:
https://cloudmarketshare.com/

Underwhelming huh!

It’s pretty early days 🙂 I am no web developer, that is clear! LOL. But the idea was to get something basic working and then go from there.

My goal is simple: add data, detection techniques and interesting ways to visualize/slice and dice the data so that I can keep an eye on movements in market share. For now that just means some basic pie charts, but longer term I hope to add graphs showing historical data and trends that show movements over time. It’s currently keeping all that data but not showing it (my frontend web skills are weak).

In time I would like to add:

  • Authentication provider market share (e.g. Azure AD vs. Okta)
  • Top 1 million domain market share
  • Domain lookup with historical data
  • Productivity suite market share (e.g. O365 vs. GSuite)
  • Meeting solution market share (e.g. Teams vs. WebEx)
  • Information on other cloud software a company uses (e.g. Docusign vs. Adobe sign)
  • More … 😉

I have a whole raft of ideas, but time is really the limiting factor for implementing things.

How it works: I give it lists of domains that I want it to analyze and it goes off and analyzes them in a number of ways using DNS, IP addresses and a range of other detection techniques. It stores the results and does a bunch of stats aggregation. The app just shows this aggregated stats data.
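To make that concrete, here is a tiny sketch of one detection technique of the kind I mean — guessing a company’s productivity suite from its MX records. This is illustrative only; the real service is .NET Core, and the function name and rules here are made up for the example:

const dns = require("dns").promises;

async function detectMailProvider(domain) {
  // MX records are a decent signal for which productivity suite a company uses
  try {
    const records = await dns.resolveMx(domain);
    const hosts = records.map(r => r.exchange.toLowerCase());
    if (hosts.some(h => h.endsWith("mail.protection.outlook.com"))) return "Microsoft (O365)";
    if (hosts.some(h => h.endsWith("google.com") || h.endsWith("googlemail.com"))) return "Google (GSuite)";
    return "Other/unknown";
  } catch (e) {
    return "No MX records found";
  }
}

detectMailProvider("microsoft.com").then(console.log);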

It all runs in Microsoft Azure (of course) and consists of .NET Core containers running in Azure Kubernetes Service, with Azure Table Storage keeping the data. I took inspiration from Troy Hunt and his post Working with 154 million records on Azure Table Storage – the story of “Have I been pwned?” and tried to keep it cheap to run, fast and easy to maintain 🙂 It’s a side interest project after all.

I’m keen to hear what you would like to see from this. Follow and tweet @CloudMktShare with your ideas and suggestions.

-CJ

KubeCon 2018 Seattle and Azure

KubeCon 2018 was in Seattle this week and I attended as media for the Microsoft Cloud Show. KubeCon is a conference dedicated to Kubernetes, its community and its thriving ecosystem of partners and vendors. This year saw 8,000 people attending, so it definitely isn’t some small-time event.


I was really interested in this event for a few reasons, not least of which was seeing what Microsoft was up to there. Like other events my primary interest was walking the expo floor and seeing what vendors and partners were there and what they were doing. It’s a great way to measure the pulse of an ecosystem. You can see who is investing in an ecosystem, get a feel for the excitement, see who is no longer participating and talk to people to get a sense of what is new and interesting.

Microsoft have been investing in Kubernetes for a while now and their presence at this conference was no exception. It was actually pretty incredible to see their booth buzzing at the show, with plenty of people at it asking questions and seeing what Azure was all about. I commented on Twitter that it almost brought a tear to my eye seeing the Microsoft booth that busy at a conference about a technology that historically would have been classified as “non-Microsoft” developer territory. Seeing Microsoft stand alongside AWS and Google Cloud with a solid offering and being taken seriously was awesome!


Microsoft have been doing really innovative things in the containers space recently, for example the public preview of the Virtual Kubelet. Virtual Kubelet brings the world of serverless together with Kubernetes: you can take advantage of Azure Container Instances with Kubernetes so that you don’t need to worry about compute capacity on your worker nodes. Run as many containers as you like and Microsoft will take care of the compute behind the scenes. It also works for other serverless container platforms like AWS Fargate.
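As a rough illustration of how this looks in practice, pods opt in to the virtual node via a node selector and a toleration for the virtual kubelet’s taint. The labels and taint key below are the general shape at the time and vary by provider, so treat this as a sketch:

apiVersion: v1
kind: Pod
metadata:
  name: virtual-node-example
spec:
  containers:
  - name: web
    image: nginx
  nodeSelector:
    type: virtual-kubelet        # target the virtual kubelet “node”
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists             # accept the virtual node’s taint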

I checked out some interesting vendors in the expo, like Atomist, who make a platform that helps you build the software delivery process you want; Sysdig, who do interesting things around monitoring and securing containers; and Rookout, who let you debug running applications in Kubernetes. It was really interesting to see these vendors providing solutions similar to the offerings we have grown used to over the years in the MS dev space with VS and Azure. It really makes you realize how far behind the rest of the industry is when it comes to developer tooling. MS is of course catching up in the Kubernetes space, but .NET tooling is incredible vs. what’s available for non-MS developers.

Finally, something I really noticed was how different a conference not run by a big vendor like Microsoft feels. It had a very different vibe with the big players as just partners rather than running the show. That rippled across everything, from session content not being massaged by a big corporate organizer, to keynotes not being all about one vendor, through to things like child care on site that just seemed so logical.
It was a really great show and I hope I get the opportunity to go again!


A simple joy of a managed AKS offering in Azure

A while back we moved to Azure Kubernetes Service for running the Hyperfish service. One of the advertised benefits we liked about AKS was that it was a managed service and that Microsoft would help us keep it in good working order. Late last week the value of this really hit home when I saw the following headline:

Kubernetes’ first major security hole discovered

It’s fair to say this freaked me out (significantly) and I immediately started to look into what we needed to do in order to secure our environments ASAP.

I went digging on twitter and found this very helpful gem from Gabe Monroy:

What a relief! I’m guessing that having people on the team who not only build and run AKS but also work on the Kubernetes project itself meant that Microsoft got the heads up about this vulnerability well before the CVE was published.

This is a fantastic example of how a managed service can help you run your applications with less manual effort. That said, a managed service comes with a set of tradeoffs, usually around flexibility and control, so your particular requirements will dictate whether you are able to take advantage of one.

Microsoft Graph with Postman tricks and tips

Postman is a popular tool for crafting and making HTTP requests. It makes calling REST/JSON APIs like the Microsoft Graph much easier. Over the years I have learnt a couple of tricks that make using Postman and the Graph much easier, and a couple of people have asked me about them after seeing them in demos. So here goes.

#1. Use variables

The first thing you need to do before calling an API like the MS Graph is to authenticate. This involves app IDs, secrets, tokens and other magic strings. Rather than pasting these into your requests you can set up an environment in Postman that contains variables defining all of these.

In an environment you can define variables like your appId, secret, tenant name etc…

Then in your requests you can use those variables, rather than copying them in, like in this call to get an access token for app-only (aka: client credential flow) calls:
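A reconstruction of that request (the variable names here are illustrative — use whatever you defined in your environment):

POST https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id={{appId}}
&client_secret={{appSecret}}
&scope=https://graph.microsoft.com/.default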

Notice that in the places where I need to insert these variables I use the {{variable_name}} syntax.

#2. Automatically cache tokens

In the call made above to get an app-only access token for the graph the response payload would look like this:
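{
  "token_type": "Bearer",
  "expires_in": 3599,
  "ext_expires_in": 3599,
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs…"
}

(access token truncated)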

Normally you would need to copy that access token out and save it into a variable for use in other calls. However, using Postman’s ability to run “tests” after responses come back, you can run a bit of JavaScript that saves the token into a variable automatically.

Here is the JavaScript:
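// Postman "Tests" tab — runs after the response comes back (a reconstruction)
var json = pm.response.json();                                // parse the token response body
pm.environment.set("appOnlyAccessToken", json.access_token); // cache the token for later requests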

This will run after your request is made and grabs the access token from the response and saves it in the appOnlyAccessToken variable.

Then you can make other graph calls like this one to get all users in a tenant using that variable.
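GET https://graph.microsoft.com/v1.0/users
Authorization: Bearer {{appOnlyAccessToken}}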

Postman has built-in support for helping with authentication such as OAuth etc… however I have never found it to be particularly reliable.  Also, you can use this same technique for other variables, like a user’s id etc…

Hope someone else finds this useful!

-CJ

Building a start-up on Azure with CI/CD, containers, and Kubernetes, without the explosive ops overhead

I had a great time speaking at Microsoft’s Ignite conference last week!  It was fun talking a bit about our startup journey at Hyperfish and some of the decisions we made along the way about running and supporting our service.

This was a totally new topic for me and something a bit outside my normal speaking topics … but I really enjoy sharing our journey with others!

You can check out the session recording here:

Using Azure Kubernetes Service (AKS) for your VSTS build agents

Sometimes the hosted build agents in VSTS don’t cut the mustard and you want full control over your build environment. That’s where self-hosted build agents come in. The problem is you ideally don’t want to run VMs, and if you are getting into Kubernetes then your dev cluster is probably sitting there idle 90%+ of the time with all those CPU cycles being wasted.

We decided to do something with that extra capacity and run a set of VSTS Linux build agents (good for Node.js and .NET Core builds etc…) in our dev AKS cluster! We can scale them up for more concurrent builds really easily.

What you will need:

  • A VSTS account and a personal access token (PAT)
  • A Kubernetes cluster (e.g. AKS) with kubectl connected to it
  • The helm and git command line tools

Let’s go …

Helm is a tool that helps you install apps in your Kubernetes environment. Helm charts are templates for your application: they define what your app needs, what containers should be deployed etc… Fortunately Microsoft make their Linux build agent available as a Docker image that we can use in a Helm chart to get it deployed. https://hub.docker.com/r/microsoft/vsts-agent/

This means all we need to do is deploy it (or many of them) to Kubernetes … and helm charts can help with that! We wrote a basic one to get things going.

Setup

First you will need to get our helm chart.

git clone git@github.com:Hyperfish/vsts-build.git

Next open up the values.yaml file and update the following properties:

  • VSTS_ACCOUNT – this is the name of your VSTS account e.g. “contoso”.
  • VSTS_POOL – this is the name of the agent pool you would like your agents registered in.
  • VSTS_TOKEN – this is your personal access token from VSTS that has been given at least the Agent Pools (read, manage) scope.
  • replicaCount – set this to how many agents you want deployed.

Note: for more information about these see the vsts agent docker image documentation.
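For illustration, the edited file ends up looking something like this — the exact key layout depends on the chart, so treat this as a sketch:

# values.yaml (sketch)
replicaCount: 3                  # how many agents to run

VSTS_ACCOUNT: contoso            # your VSTS account name
VSTS_POOL: kubernetes-agents     # the agent pool to register in
VSTS_TOKEN: <your PAT>           # PAT with Agent Pools (read, manage) scope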

Deploy

Once you have updated the values.yaml file you are ready to deploy!

Ensure you are in the /vsts-agent folder and have kubectl connected to the kubernetes cluster you want to deploy the application to. (tip: run “kubectl cluster-info” to check you are connected)

Deploy the chart:

helm install .

Once complete the agent will be started in your kubernetes cluster.

helm ls

This will show you the apps you have deployed, and you should see the vsts-agent chart listed.

Check your VSTS build pool that you specified in the values.yaml file. You should see your new agents listed.

Troubleshooting:
If you don’t see them listed then it’s likely that the values you set are incorrect. You can check the logs of your agents using:

kubectl logs <pod name>

You might see something like “error: missing VSTS_ACCOUNT environment variable”
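Tip: if you aren’t sure of the pod names, list them first:

kubectl get pods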

Summary

Kubernetes is a great way to deploy multiple VSTS build agents, and deploying with a Helm chart is even nicer! It gives you a simple way to deploy and manage your agents in Kubernetes.

Enjoy!

-CJ

Moving to Azure Kubernetes Service (AKS)

We recently moved our production service to the new Azure Kubernetes Service (AKS) from Microsoft. AKS is a managed Kubernetes (K8s) offering, meaning Microsoft manage part of the cluster for you. With AKS you pay for your worker nodes, not your master nodes (and thus the control plane), which is nice.

Don’t know what Kubernetes is?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. — https://kubernetes.io/

Why did we move

The short story is that we were forced to evaluate our options because the orchestration software we were using to run our service was shutting down. Docker Cloud was a SaaS offering providing orchestration/management of our Docker-container-based service. We used it for deploying new containers, rolling out updates, scheduling those containers on various nodes (VMs) and keeping an eye on it all. It was a very innovative offering two years ago when we started using it: cloud based, easy to use and price competitive. Anyway, Docker, with their new focus on making money in the Enterprise, decided to retire the product and we were forced to look elsewhere.

Kubernetes was the obvious choice. Its momentum in the industry means there is a plethora of tools, guidance, community and offerings around it. Our service was already running in Docker containers and so didn’t require significant changes to run in Kubernetes. Our service comprises ~20 or so “services”, e.g. frontend, API. Kubernetes helps you run these services: it handles scheduling those containers, spinning up new ones if they stop etc.

Every major cloud provider has a Kubernetes offering now. Google’s GKE has been around since as early as Nov 2014, and Amazon recently released EKS (June 2018).

Choosing AKS 

We are not a big team and we couldn’t afford a dedicated team to run our orchestration and management platform. We needed an offering run by Kubernetes experts who know the nitty gritty of running K8s. The team building AKS at Microsoft are those people. Led by Kubernetes project co-founder Brendan Burns, the MS team know their stuff. It was compelling that they were taking new approaches in the managed K8s space, like not charging for the control plane/master nodes in a cluster, and were set on having it be vanilla K8s and not a weird fork with proprietary peculiarities.

Summary of reasons (in no particular order):

  • Azure based. Our customers expect the security and trust that Microsoft offers. Plus we were already in Azure.
  • Managed offering. We didn’t want to have to run the cluster plumbing.
  • Cost. Solid price performance without master node costs.
  • Support. Backed by a team that know their stuff and offer support (more on this below).

AKS is relatively new, and at the time we started considering our options it was not a generally available service. We didn’t know when it would GA either.  This pained me to say the least, but we had a hunch it was coming soon. To mitigate the risk we investigated acs-engine, the tool AKS uses behind the scenes to generate ARM templates that stand up a K8s cluster in Azure. Our backup plan was to run our own K8s cluster for a while until AKS went GA. Fortunately we didn’t need to do this 🙂

Moving to Kubernetes

We were in the fortunate position that our service was already running in a set of Docker containers.  Tweaking these to run with Kubernetes required only minor changes, all focused on supplying configuration to the containers. We used a naming convention for environment variables that wasn’t Kubernetes friendly, so we needed to tweak the way we read that configuration in our containers.
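As an illustration of the kind of mismatch (not necessarily our exact convention): .NET-style hierarchical setting names use separators that environment variables don’t love, and .NET Core’s configuration system maps the colon to a double underscore, which keeps the names Kubernetes friendly:

# a setting like ConnectionStrings:Default becomes the environment variable
ConnectionStrings__Default=Server=sql;Database=app;...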

The major work required was defining the “structure” of our application in Kubernetes configuration files. These files define the components of your application, their configuration, how they can be communicated with and what resources they need. They are just YAML files, but manually building them can be tedious when you have quite a few to do, and there can be repetition between them that is painful to keep in sync.

This is where Helm comes in.

Helm is the “package manager for Kubernetes” … but I prefer to think of it as a tool that helps you build templates (“charts”) of your application. 

In Azure speak they are like ARM templates for your application definition.

The great part about Helm is that it separates the definition of your application and the environmental specifics. For example, you build a chart for your application that might contain a definition for your frontend app and an API, what resources they need and the configuration they get e.g. DB connection string. But you don’t have to bake the connection string into your chart. That is supplied in an environment specific values file. This lets you define your application and then create environment specific values files for each environment you will deploy your application to e.g. test, stage, production etc. You can manage your chart in source control etc. and manage your environment specific values files (with secrets etc.) outside of source control in a safe location.

This means we can deploy our service on a developer laptop, AKS cluster in Azure for test or into Production using the exact same definition, but with different environment specific configuration.

Chart + Environment config == Full definition.
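A tiny illustrative fragment (not our actual chart) of how that separation looks — the template references a value, and each environment’s values file supplies it:

# templates/api-deployment.yaml (fragment)
env:
- name: DB_CONNECTION_STRING
  value: {{ .Values.api.dbConnectionString | quote }}

# values.production.yaml
api:
  dbConnectionString: "Server=prod-sql;Database=app;User Id=api;Password=<from your secret store>"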

We already had a Docker Compose definition of our service that our engineers used to run the stack locally during development. We took this and translated it into a Helm chart. This took a bit of work as we were learning along the way, but the result is excellent. One definition of our service in a declarative, modular and configurable way that we use across development, test environments and production.

Helm charts are already available for loads of different applications too. This means if you want to run apps like redis, postgres or zookeeper you don’t have to build your own helm charts.
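For example, at the time you could stand up Redis with a single command from the community chart repository (chart names and repos have moved around since):

helm install stable/redis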

That’s a lot of words … what’s the pay off Chris?

The best way I can demonstrate in a blog post the value all this brings is to show you how simple it is to deploy our application.

Here are the CLI steps for deploying a brand new, fully functional 4 node environment in Azure with AKS + Helm for our application:

az aks create --resource-group myCluster --name myCluster --admin-username cjadmin --dns-name-prefix appName --node-count 4

helm upgrade myApp . -f values.yaml -f values.dev.yaml --install

Two commands to a fully functional environment, running on a 4 node K8s cluster in Azure. Not bad at all!! It takes us about 10 mins to spin up depending on how fast Azure is at provisioning nodes.

What didn’t go well

Of course there were things that didn’t go perfectly along with all this too.  Not specifically AKS related, but Azure related. One in particular really pissed me off 🙂 During the production move we needed to move some static IP addresses around in our production environment. We started the move and it seemingly failed partway through, leaving our resource group in Azure locked for ~4 hours!! During this time we couldn’t update, edit or add anything in the resource group. We logged a Severity A support ticket with MS, which is supposed to have a 1 hour response time … over 3 hours later we were still waiting. We couldn’t wait, and needed to take mitigation steps that included spinning up a totally new and separate environment (easy with AKS!) and doing some hacky things with VMs and HAProxy to get traffic proxied correctly to it. Some of our customers whitelist our static IP addresses in their firewalls, so we don’t have the luxury of a simple DNS change to point at a new environment. It was disappointing to say the least: we pay for a support contract with MS, but Azure failed, and more importantly our support from MS failed and left us high and dry for 4 hours.  PS: they still don’t know what happened, but are investigating it.

Summary

Docker closing its Docker Cloud offering was the motivation we needed to evaluate Kubernetes and other services. It left me with a bad taste in my mouth about Docker as a company, and I will find it hard to recommend or trust taking a dependency on a product offering from them. Deprecating a SaaS product with two months’ notice is not the right way to operate a business if you are interested in keeping customers, IMHO. But nevertheless it was ultimately a good thing for us!

Our experience moving to AKS has been nothing short of excellent. We are very glad the timing of AKS worked in our favor and that Microsoft have a great offering that meets our needs. It’s still early days with AKS and time will be the ultimate proof, however as of today we are very happy with the move.

If you are new to container-based applications and come from a Microsoft development background, I recommend checking out a short tutorial on ASP.NET + Docker. I have thoroughly enjoyed building a SaaS service that serves millions of users in a container-based model and think many of the benefits it offers are worth considering for new projects.

If you want to learn Kubernetes in Azure I recommend their getting started tutorial. It will give you a basic understanding of K8s and how AKS works.

Try out the tutorial on AKS + Helm for deploying applications to get started on your journey to loving Helm.

Finally … I interviewed Gabe Monroy from the AKS team at Build 2018 for the Microsoft Cloud Show, if you are interested in hearing more about AKS, the team behind it and Microsoft’s motivations for building it!

-CJ

SharePoint Conference North America thoughts and slide links

What a fun few days hanging out with friends and colleagues in Las Vegas! SPC NA seemed to go pretty well. The biggest thing for me this week was being surrounded by people who all share a common passion. That is what SPCs in the past were great at, and I think some of that was rekindled this week. As always, wandering the expo floor and talking with other vendors about what they are building is my favorite thing about conferences, and this week was great for that. Some new faces popped up too!

For those looking for my slides from the session I did, you can find them below. Unfortunately there is a lot of content in the demos that isn’t captured in the slides, but I hope it helps.

Office 365 development Slack

Many years back a small group of friends started the Office 365 developer Slack network.  A bunch of people joined and then it rapidly went nowhere 🙂

I think that is a crying shame. 

I recently joined a Slack network for Kubernetes and it’s a fantastic resource for asking people questions, working with others on issues you are having, and generally learning and finding things.

Office 365 development has changed a lot over the years and people are finding new ways of doing things all the time. The Microsoft Graph is taking off in a HUGE way, SPFx is becoming a viable way to build on the SharePoint platform, Azure AD is the center of everything in the cloud for MS, and the ecosystem is heating up with amazing new companies and products springing up.

Anyway … I miss having a Slack network for Office 365 development chats with people.

You can join the O365 Dev Slack here if you feel the same way …

http://officedevslack.azurewebsites.net/

-CJ