Author Archives: Chris Johnson

About Chris Johnson

Chris is an avid developer and speaker, and is the General Manager of Provoke Solutions Inc., a Microsoft Gold Partner in Seattle, WA, that is one of the world's most renowned and sought-after online experience consultancies. Provoke Solutions specializes in software solutions for SharePoint and the Microsoft technology stack (http://www.provokesolutions.com). In Nov 2011 Chris left Microsoft Corporation after nine and a half years, where he was most recently a Senior Technical Product Manager for the SharePoint product group in Redmond, Washington, managing SharePoint's professional developer audience technical marketing programs. Chris moved to Redmond in 2007 to work in the software engineering team on the SharePoint 2010 release after working for Microsoft New Zealand. In New Zealand he consulted to customers across the Asia Pacific region on designing and implementing Content Management Server and SharePoint deployments. Chris' background is in Microsoft software development and he enjoys all things technical. He is a speaker at numerous conferences around the world such as Tech.Ed, the SharePoint Best Practices Conference, SharePoint Connections and the worldwide SharePoint Conference. Chris holds a Bachelor of Computer Science and enjoys throwing himself out of perfectly good airplanes from time to time. Chris blogs and can be contacted via www.looselytyped.net

Using Azure Kubernetes Service (AKS) for your VSTS build agents

Sometimes the hosted build agents in VSTS don't cut the mustard and you want full control over your build environment. That's where self-hosted build agents come in. The problem is you ideally don't want to run VMs, and if you are getting into Kubernetes then your dev cluster is probably sitting there idle 90%+ of the time with all those CPU cycles going to waste.

We decided to do something with that extra capacity and run a set of VSTS Linux build agents (good for Node.js and .NET Core builds etc…) in our dev AKS cluster! We can scale them up for more concurrent builds really easily.

What you will need:

  • A Kubernetes cluster to deploy to (we use our dev AKS cluster)
  • Helm and kubectl installed, with kubectl connected to your cluster
  • A VSTS account and a personal access token (PAT) with at least the Agent Pools (read, manage) scope

Let's go …

Helm is a tool that helps you install apps in your Kubernetes environment. Helm charts are templates for your application. They define what your app needs, what containers should be deployed etc… Fortunately Microsoft makes their Linux build agent available as a Docker image that we can use in a Helm chart to get it deployed: https://hub.docker.com/r/microsoft/vsts-agent/

This means all we need to do is deploy it (or many of them) to Kubernetes … and Helm charts can help with that! We wrote a basic one to get things going.

Setup

First you will need to get our Helm chart.

git clone git@github.com:Hyperfish/vsts-build.git

Next open up the values.yaml file and update the following properties:

  • VSTS_ACCOUNT – this is the name of your VSTS account e.g. “contoso”.
  • VSTS_POOL – this is the name of the agent pool you would like your agents registered in.
  • VSTS_TOKEN – this is your personal access token from VSTS that has been given at least the Agent Pools (read, manage) scope.
  • replicaCount – set this to how many agents you want deployed.

Note: for more information about these see the VSTS agent Docker image documentation.
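For illustration, a filled-in values.yaml might look something like the sketch below. The exact key names and layout are defined by the chart in the repo, so treat this as an approximation rather than the definitive file:

# values.yaml (illustrative sketch only; check the chart in the repo for the real layout)
replicaCount: 2
VSTS_ACCOUNT: "contoso"
VSTS_POOL: "Default"
VSTS_TOKEN: "<personal access token with Agent Pools (read, manage) scope>"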

Deploy

Once you have updated the values.yaml file you are ready to deploy!

Ensure you are in the /vsts-agent folder and have kubectl connected to the Kubernetes cluster you want to deploy the application to. (Tip: run “kubectl cluster-info” to check you are connected.)

Deploy the chart:

helm install .
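Tip: with Helm 2 you can also give the release a friendly name rather than letting Helm generate one for you, which makes it easier to spot later:

helm install --name vsts-agents .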

Once complete, the agents will be started in your Kubernetes cluster.

helm ls

This will show you the apps you have deployed and you should see the vsts-agent chart deployed.

Check your VSTS build pool that you specified in the values.yaml file. You should see your new agents listed.

Troubleshooting:
If you don't see them listed then it's likely that the values you set are incorrect. You can check the logs of your agents using:

kubectl logs <pod name>

You might see something like “error: missing VSTS_ACCOUNT environment variable”
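Tip: if you are not sure of the pod names, list the pods in the cluster and look for the agent pods:

kubectl get pods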

Summary

Kubernetes is a great way to deploy multiple VSTS build agents! Deploying with a Helm chart is even nicer! It gives you a simple way to deploy and manage your agents in Kubernetes.

Enjoy!

-CJ

Moving to Azure Kubernetes Service (AKS)

We recently moved our production service to the new Azure Kubernetes Service (AKS) from Microsoft. AKS is a managed Kubernetes (K8s) offering from Microsoft, which in this case means Microsoft manages part of the cluster for you. With AKS you pay for your worker nodes, not your master nodes (and thus the control plane), which is nice.

Don’t know what Kubernetes is?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. — https://kubernetes.io/

Why did we move?

The short story is that we were forced to evaluate our options because the orchestration software we were using to run our service was shutting down. Docker Cloud was a SaaS offering that provided orchestration/management of our Docker-container-based service. This meant we used it for deploying new containers, rolling out updates, scheduling those containers on various nodes (VMs) and keeping an eye on it all. It was a very innovative offering two years ago when we started using it: cloud based, easy to use and price competitive. Anyway, Docker, with their new focus on making money in the enterprise, decided to retire the product and we were forced to look elsewhere.

Kubernetes was the obvious choice. Its momentum in the industry means there is a plethora of tools, guidance, community and offerings around it. Our service was already running in Docker containers and so didn't require significant changes to run in Kubernetes. Our service is made up of ~20 or so “services”, e.g. frontend, API. Kubernetes helps you run these services. It handles things like scheduling those containers to run, spinning up new ones if they stop etc.

Every major cloud provider has a Kubernetes offering now. Google's GKE has been around since as early as Nov 2014, and Amazon recently released EKS on AWS (in June 2018).

Choosing AKS 

We are not a big team and we couldn't afford a dedicated team to run our orchestration and management platform. We needed an offering run by Kubernetes experts who know the nitty gritty of running K8s. The team building AKS at Microsoft are those people. Led by Brendan Burns, co-founder of the Kubernetes project, the MS team know their stuff. It was compelling that they were taking new approaches in the managed K8s space, like not charging for the control plane/master nodes in a cluster, and were set on it being vanilla K8s and not a weird fork with proprietary peculiarities.

Summary of reasons (in no particular order):

  • Azure based. Our customers expect the security and trust that Microsoft offers. Plus we were already in Azure.
  • Managed offering. We didn’t want to have to run the cluster plumbing.
  • Cost. Solid price performance without master node costs.
  • Support. Backed by a team that know their stuff and offer support (more on this below).

AKS is relatively new, and at the time we started considering our options for the move it was not a generally available service. We didn't know when it would GA either. This pained me to say the least, but we had a hunch it was coming soon. To mitigate this we investigated acs-engine, which is a tool that AKS uses behind the scenes to generate ARM templates for Azure to stand up a K8s cluster. Our backup plan was to run our own K8s cluster for a while until AKS went GA. Fortunately we didn't need to do this 🙂

Moving to Kubernetes

We were in the fortunate position that our service was already running in a set of Docker containers.  Tweaking these to run with Kubernetes only required minor changes. Those were all focused on supplying configuration to the containers. We used a naming convention for environment variables that wasn’t Kubernetes friendly, so we needed to tweak the way we read that configuration in our containers.

The major work required was in defining the “structure” of our application in Kubernetes configuration files. These files define the components in your application, their configuration, how they can be communicated with and what resources they need. They are just YAML files, however manually building them can be tedious when you have quite a few to do. There can also be repetition between them, and keeping them in sync can be painful.
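To give a feel for what these files look like, here is a minimal, made-up sketch of a Deployment for a single service (not our actual definition; the image name and setting are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myregistry.azurecr.io/frontend:1.0   # hypothetical image
          ports:
            - containerPort: 80
          env:
            - name: DB_CONNECTION_STRING              # hypothetical setting
              value: "<connection string>"

Multiply that by ~20 services (plus Services, ConfigMaps and so on) and you can see how the repetition adds up.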

This is where Helm comes in.

Helm is the “package manager for Kubernetes” … but I prefer to think of it as a tool that helps you build templates (“charts”) of your application. 

In Azure speak they are like ARM templates for your application definition.

The great part about Helm is that it separates the definition of your application and the environmental specifics. For example, you build a chart for your application that might contain a definition for your frontend app and an API, what resources they need and the configuration they get e.g. DB connection string. But you don’t have to bake the connection string into your chart. That is supplied in an environment specific values file. This lets you define your application and then create environment specific values files for each environment you will deploy your application to e.g. test, stage, production etc. You can manage your chart in source control etc. and manage your environment specific values files (with secrets etc.) outside of source control in a safe location.

This means we can deploy our service on a developer laptop, AKS cluster in Azure for test or into Production using the exact same definition, but with different environment specific configuration.

Chart + Environment config == Full definition.
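As a purely hypothetical illustration (the names here are made up, not our actual chart), a chart template might reference a value like this:

          env:
            - name: DB_CONNECTION_STRING
              value: {{ .Values.api.dbConnectionString | quote }}

and each environment then supplies its own values file:

# values.dev.yaml
api:
  dbConnectionString: "Server=dev-db;..."

# values.prod.yaml (kept outside source control)
api:
  dbConnectionString: "Server=prod-db;..."

Same chart, different values file, different environment.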

We already had a Docker Compose definition of our service that our engineers used to run the stack locally during development. We took this and translated it into a Helm chart. This took a bit of work as we were learning along the way, but the result is excellent. One definition of our service in a declarative, modular and configurable way that we use across development, test environments and production.

Helm charts are already available for loads of different applications too. This means if you want to run apps like Redis, Postgres or ZooKeeper you don't have to build your own Helm charts.
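For example, with Helm 2 syntax, installing Redis from the public chart repository is as simple as something like:

helm install stable/redis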

That’s a lot of words … what’s the pay off Chris?

The best way I can demonstrate in a blog post the value all this brings is to show you how simple it is to deploy our application.

Here are the CLI steps for deploying a brand new, fully functional 4 node environment in Azure with AKS + Helm for our application:

az aks create --resource-group myCluster --name myCluster --admin-username cjadmin --dns-name-prefix appName --node-count 4

helm upgrade myApp . -f values.yaml -f values.dev.yaml --install

Two commands to a fully functional environment, running on a 4 node K8s cluster in Azure. Not bad at all!! It takes us about 10 mins to spin up depending on how fast Azure is at provisioning nodes.
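(To be fair, I am glossing over one small step: after creating the cluster you point kubectl at it with az aks get-credentials --resource-group myCluster --name myCluster, and with Helm 2 you also run helm init once to install Tiller into the cluster.)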

What didn’t go well

Of course there were things that didn't go perfectly along the way too. Not specifically AKS related, but Azure related. One in particular really pissed me off 🙂 During the production move we needed to move some static IP addresses around in our production environment. We started the move and it seemingly failed partway through. This left our resource group in Azure locked for ~4 hours!! During this time we couldn't update, edit or add anything to our resource group. We logged a Severity A support ticket with MS, which is supposed to have a 1 hour response time … over 3 hours later we were still waiting. We couldn't wait and needed to take mitigation steps, which included spinning up a totally new and separate environment (easy with AKS!) and doing some hacky things with VMs and HAProxy to get traffic proxied correctly to it. Some of our customers whitelist our static IP addresses in their firewalls, so we don't have the luxury of a simple DNS change to point at a new environment. It was disappointing to say the least that we pay for a support contract from MS but Azure failed and, more importantly, our support with MS failed and left us high and dry for 4 hours. PS: they still don't know what happened, but are investigating it.

Summary

Docker closing its Docker Cloud offering was the motivation we needed to evaluate Kubernetes and other services. It left me with a bad taste in my mouth about Docker as a company, and I will find it hard to recommend or trust taking a dependency on a product offering from them. Deprecating a SaaS product with 2 months notice is not the right way to operate a business if you are interested in keeping customers, IMHO. But it was nevertheless a good thing for us ultimately!

Our experience moving to AKS has been nothing short of excellent. We are very glad the timing of AKS worked in our favor and that Microsoft have a great offering that meets our needs. It’s still early days with AKS and time will be the ultimate proof, however as of today we are very happy with the move.

If you are new to container based applications and come from a Microsoft development background, I recommend checking out a short tutorial on ASP.NET + Docker. I have thoroughly enjoyed building a SaaS service that serves millions of users in a container based model and think many of the benefits it offers are worth considering for new projects.

If you want to learn Kubernetes in Azure I recommend their getting started tutorial. It will give you a basic understanding of K8s and how AKS works.

Try out the tutorial on AKS + Helm for deploying applications to get started on your journey to loving Helm.

Finally … I interviewed Gabe Monroy from the AKS team when I was at Build 2018 for the Microsoft Cloud show if you are interested in hearing more about AKS, the team behind it and Microsoft’s motivations for building it!

-CJ

SharePoint Conference North America thoughts and slide links

What a fun few days hanging out with friends and colleagues in Las Vegas. SPC NA seemed to go pretty well! I think the biggest thing for me this week was being surrounded by people who all share a common passion. That is what SPCs in the past were great at, and I think some of that was rekindled this week. As always, wandering the expo floor and talking with other vendors about what they are building is my favorite thing about conferences, and this week was great for that. Some new faces popped up too!

For those looking for my slides from the session I did, you can find them below. Unfortunately there is a lot of content in the demos that isn't captured in the slides, but I hope it helps.

Office 365 development Slack

Many years back a small group of friends started the Office 365 developer Slack network. A bunch of people joined and then it rapidly went nowhere 🙂

I think that is a crying shame. 

I recently joined a Slack network for Kubernetes and it's a fantastic resource for asking people questions, working with others on issues you are having, and generally learning and finding things.

Office 365 development has changed a lot over the years and people are finding new ways of doing things all the time. The Microsoft Graph is taking off in a HUGE way, SPFx is becoming a viable way to build on the SharePoint platform, Azure AD is the center of everything in the cloud for MS, and the ecosystem is heating up with amazing new companies and products springing up.

Anyway … I miss having a Slack network for Office 365 development chats with people.

You can join the O365 Dev Slack here if you feel the same way …

http://officedevslack.azurewebsites.net/

-CJ

Domain Controller in Azure VM with expired password

Came across an interesting situation this morning and thought I would drop the solution I found here in case anyone else needs to figure this out.

Situation:

  • Active Directory Domain Controller in an Azure VM
  • Your admin account has an expired password
  • RDP'ing to the machine says your password is expired and you need to set a new one, but it keeps prompting you around in a circle that you need to update it … but you can't.

The first thing you will likely try is the Reset Password option in the Azure portal. It doesn't work for Domain Controllers (this changed recently … no idea why). You get an error message that says:

VMAccess Extension does not support Domain Controller

At this point you start trying to figure out if there is another admin account you can use to log in with. In my case this was a dev/test AD box and it only had the one admin account on it.

Solution:

Before you go and delete the VM and build a new one, here is an interesting way I found to fix this.

You can use the Azure portal and a VM extension to upload and run a script on the machine that resets the password for you. Here is how you do it (a CLI alternative is sketched after these steps).

  • Create a script called “ResetPassword.ps1”
  • Add one line to that script

net user <YourAdminUserName> <YourNewPassword>

  • Go to the VM in the Azure portal
  • Go into the Extensions menu for that VM
  • In the top menu pick “Add”
  • Choose the Custom Script extension

 

  • Click Create
  • Pick your ResetPassword.ps1 script file 
  • Ok
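If you prefer the command line over the portal, the same thing can be done with the Custom Script extension from the Azure CLI. This is only a sketch and assumes you have uploaded ResetPassword.ps1 to a blob storage location the VM can reach; replace the placeholders with your own values:

az vm extension set --resource-group <ResourceGroup> --vm-name <VMName> --publisher Microsoft.Compute --name CustomScriptExtension --settings '{"fileUris": ["<URL to ResetPassword.ps1>"], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File ResetPassword.ps1"}'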

Wait for the extension to be deployed and run. After a while the extension status will show that it has completed successfully.

You should be set to RDP into your machine again with the new password you set in the script file.

I have no idea why the reset password functionality in Azure decided to exclude AD DCs … but if you get stuck I hope this helps.

-CJ

Importing and Exporting photos from Office 365 and Active Directory

Photos in Office 365 are a pain. This is because there are currently three(!) main places photos are stored in Office 365:

  • SharePoint Online (SPO)
  • Exchange Online (EXO)
  • Azure Active Directory (AAD)

If you are syncing profiles using AD Connect, those profiles are being synced into AAD. From there they take a rather arduous and rocky path to EXO and SPO via a number of intermediary steps that may not work or may only complete after a period of time. For example, photos may only sync from EXO to SPO if someone's photo has not been set directly in SPO and a particular profile property attribute is set correctly.

Needless to say it can be very hard to work out why you have one photo in AD on-prem, one in AAD, one in EXO and another in SPO … all potentially different!

At Hyperfish we built our product to push photos to all the places they should go when someone updates them. This means that when someone updates their photo, and that update is approved, it is resized appropriately for each location and then saved into AD, EXO and SPO immediately. This results in the person's photo being the same everywhere at the same time. Happy place!

However, we have some customers who want to bulk move/copy photos between these different systems to get things back in order all at once. To do this we created a helpful wee command-line utility. We have published the source code on GitHub, along with a precompiled release if you don't want to compile it yourself.

Photo Importer Exporter

Creative naming huh! 🙂

The Photo Importer Exporter is a very basic Windows command line utility that lets you:

  • Export photos from SharePoint Online
  • Export photos from Exchange Online
  • Import photos to Active Directory
  • Import photos to SharePoint Online
  • Import photos to Exchange Online

All you do is Export the photos from, say, Exchange Online (the highest resolution location) and it will download them all to the /photos directory. Then you can Import them to, say, Active Directory and the utility will resize them appropriately and save them to AD for you. Simple huh.

Code: https://github.com/Hyperfish/PhotoImporterExporter
Releases: https://github.com/Hyperfish/PhotoImporterExporter/releases

It's a work in progress and we will be adding bits to it as we come across other scenarios we want to support.

But for now you can grab the code and take a look, suggest additions, make pull requests if you like, log issues … or just use it 🙂

Happy coding,
-CJ

Tracking planes with Raspberry Pi and Docker

When you use a flight tracking app on your phone to see where a flight is, it's very possible that location data has been crowdsourced. Pretty cool!

Sites like FlightAware.com and FlightRadar24.com use feeds of data from people around the world to help build their datasets. Participating in those feeds is open to anyone who has some basic equipment. It works by listening to the ADS-B and Mode S signals transmitted by aircraft. These signals identify aircraft and in some cases include positional data. It's very easy to listen for these signals using a 1090MHz antenna and an ADS-B receiver. A couple of years ago I bought some equipment on Amazon, hooked up the software on a Raspberry Pi and got started feeding the data to FlightAware.com.

However, recently I stepped things up a notch with a new better antenna and dockerizing my setup. More on that in a moment, but first …

Getting started – for those who want to try this themselves

If you want to try this out yourself you will need some basic equipment:

  • Raspberry Pi
  • ADS-B USB stick
  • Antenna

You can buy kits from FlightAware on Amazon with everything you need included.

Once you have your equipment, the best place to start is with PiAware, FlightAware's pre-configured Raspberry Pi software. It walks you through everything needed to get up and running and feeding their network with your juicy tracking data.

You should be up and feeding the network in an hour or two.


The “good” with the PiAware guide is that it's the simplest build process; the “bad” is that it's specific to FlightAware and doesn't set you up to feed other providers.

“Need more input!” – Short Circuit (1986)

Eventually you might find yourself wanting more “range”. The small indoor antenna might let you track aircraft 30mi/50km away, depending on the terrain around you, line of sight, trees etc…

When you “need more input” you will need a better antenna. This may possibly require some WAF (“wife acceptance factor”) (or HAF, husband acceptance factor) … as it will likely require putting something outdoors and, for best results, on your roof.

I recently upgraded to the FlightAware-made outdoor antenna.


My situation called for mounting it externally on the roof along a gutter line. Ideally I would have mounted it on a peak of the roof, but I didn't feel comfortable drilling holes in my roof, so I opted for a mount that let me hang it from under the eaves.


Satellite Under Eave Mount 1 5/8

Here is what the setup looks like mounted: [photo of the antenna mounted under the eaves]

The new outdoor setup and better antenna really bumped up my coverage. Even without optimal mounting (there are trees on the south side of our house), range went from < 50mi to ~150-200mi in some directions.


Dockerizing all the things

I like Docker containers. They make my life simple for running different apps and services on one box, and it seemed to make sense that you should be able to run the piaware and dump1090 software in containers instead of directly on the Raspberry Pi.

I came across an article, “Get eyes in the sky with your Raspberry Pi” by Alex Ellis, who had done just that! In Alex's setup however the configuration of the containers is baked into the Docker images at build time, which isn't ideal. I made some improvements, like moving all configuration to environment variables, and added Docker Compose support.

You can find the code and instructions here: https://github.com/LoungeFlyZ/eyes-in-the-sky
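To give a rough idea of the shape of the setup, here is an illustrative docker-compose sketch. The image names and variable names below are placeholders only; the real ones are in the repo's README and compose file:

version: "2"
services:
  dump1090:
    image: "your-dump1090-image"          # placeholder; see the repo for the real image
    devices:
      - "/dev/bus/usb"                    # pass the RTL-SDR USB stick through to the container
  piaware:
    image: "your-piaware-image"           # placeholder; see the repo for the real image
    environment:
      - FEEDER_ID=${FEEDER_ID}            # placeholder variable name for your FlightAware feeder id
    depends_on:
      - dump1090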


With everything in Docker containers it was relatively simple to add a feeder to another tracking site, FlightRadar24.com. They also provide software, “fr24feed”, that takes a feed from dump1090 and processes/uploads it. You can find optional instructions in the ReadMe file on how to add this pretty simply.

Summary

I love this stuff. It's a fun project with hardware and software aspects to it. Hanging out of a second story window while being held around the waist by my wife was a “hilarious” exercise that I suggest every marriage attempts at some point.

I still have some re-wiring to do in the attic to secure the wiring a bit more, and I'll possibly add some more feeders for other tracking sites before I'm done with the project.

Going forward I'm not sure what is next for this project yet. I'm sure there is more to be done and that I'll likely be mounting more hardware on the roof at some point! LOL.

I hope you can enjoy the frivolity of a project like this as much as I do!

-CJ

Sniffing Azure Storage Explorer traffic

A friend asked a question about looking at how Azure Storage Explorer makes its API calls to Azure using something like Fiddler.

The issue with just firing up Fiddler and watching traffic is that, to decrypt HTTPS traffic, Fiddler installs a root certificate so SSL is terminated in Fiddler first and it can show you the decrypted payloads back and forth etc…

That is normally fine with apps that use the standard WinINET libraries etc… to make HTTPS calls (like Chrome). However, Azure Storage Explorer is an Electron app using Node.js and doesn't use these. Node also handles root CAs a bit differently and, long story short, it doesn't by default trust the Fiddler root certificate that gets installed. This means that HTTPS calls fail with an “unable to verify the first certificate” error.

Setting up Fiddler

First you need to set up Fiddler to decrypt HTTPS traffic. You do this in Fiddler's options under Tools > Options > HTTPS.

This will prompt you to install a certificate that Fiddler uses to terminate SSL in Fiddler so it can show you the decrypted traffic.

Once you have completed this you need to export the certificate Fiddler installed so that you can set up Storage Explorer with it.

  1. Run MMC.exe
  2. File > Add/Remove Snap-in
  3. Pick Certificates, and when prompted choose “Computer account” and “Local computer”
  4. Navigate to Certificates > Trusted Root Certification Authorities > Certificates
  5. Find the “DO_NOT_TRUST_FiddlerRoot” certificate
  6. Right click > All Tasks > Export
  7. As you go through the wizard choose “Base-64 encoded X.509 (.CER)” for the file format
  8. Save it to your desktop or somewhere you will be able to find it later


Setting up Azure Storage Explorer

First up you need to configure Azure Storage Explorer to use Fiddler as a proxy. This is pretty straightforward.

In Storage Explorer go to the Edit > Configure Proxy menu and add 127.0.0.1 and port 8888 (Fiddler's defaults). Note: no authentication should be used.

Now Storage Explorer will use Fiddler … however, you will start getting “unable to verify the first certificate” errors as Storage Explorer still doesn't trust the root certificate that Fiddler is using for SSL termination.

To add the Fiddler certificate, go to Edit > SSL Certificates > Import Certificates and pick the .cer file you saved out earlier. Storage Explorer will prompt you to restart in order for these to take effect.

Now when you start using Storage Explorer you should start seeing its traffic in Fiddler in a readable, decrypted state.

Now you can navigate around, do various operations, and see what Azure Storage Explorer is doing and how.

Happy Coding.
-CJ

Making perfectly clear ice

Call me weird … but I like clear ice in my drinks for some odd reason. So I went on a mission to learn how to make it at home.

Note: This is almost totally pointless. It tastes the same, is just as cold (I think) … but it looks sweet!

Here is how I do it:

-= Making the Ice =-
1. Boil lots of water
2. Let it cool to room temperature (don't skip this step)
3. Boil it again
4. Let it cool to room temperature (yes, again)
5. Fill a small cooler with the water. Leave about 10cm (4″) of room at the top
6. Take the lid off the cooler or leave the lid open
7. Put the cooler in your freezer
8. Leave it for 24 hours
9. Take the cooler out of the freezer and it will be frozen on the top 8cm (3″) or so
10. Slide a knife down the side of the block and ease it around the block. This will let in air and release the block
11. Take the block out carefully and use a knife to remove sharp bits

-= Cutting it/Shaping it=-
Note: I have not mastered this bit yet

1. Get a saw or knife and score the block where you want to cut it
2. Place the knife on the score line and knock/hit it with a rubber mallet
3. Hopefully it will break cleanly
4. Repeat until you have the rough blocks you want
5. Use a hot frying pan to shape the blocks and make the sides of the block perfect

Bask in the glory of your perfectly clear ice.

-CJ

RunAsRadio: Keeping Active Directory Data Up to Date with Chris Johnson

I recently sat down with Richard Campbell on the RunAsRadio podcast to talk about the state of directories, why people profile data is a critical component of SharePoint and Office 365 deployments and how Hyperfish can help organizations with their profile and directory mess.

Listen here: http://www.runasradio.com/Shows/Show/508