Category Archives: Development

Domain Controller in Azure VM with expired password

I came across an interesting situation this morning and thought I would drop the solution I found here in case anyone else needs to figure this out.

Situation:

  • Active Directory Domain Controller in an Azure VM
  • Your admin account has an expired password
  • RDP’ing to the machine tells you your password is expired and you need to set a new one, but it keeps sending you around in circles prompting you to update it … and you can’t. 

The first thing you will likely try is the Reset Password option in the Azure portal. It doesn’t work for Domain Controllers (this changed recently … no idea why). You get an error message that says:

VMAccess Extension does not support Domain Controller

At this point you start trying to figure out if there is another admin account you can use to log in with. In my case this was a dev/test AD box and it only had the one admin account on it.

Solution:

Before you go and delete the VM and build a new one, know that I found an interesting way to fix this.

You can use the Azure portal and a VM extension to upload and run a script on the machine that resets the password for you. Here is how you do it.

  • Create a script called “ResetPassword.ps1”
  • Add one line to that script

net user <YourAdminUserName> <YourNewPassword>

  • Go to the VM in the Azure portal
  • Go into the extensions menu for that VM
  • In the top menu pick “Add”
  • Choose the Custom Script extension


  • Click Create
  • Pick your ResetPassword.ps1 script file 
  • Ok

Wait for the extension to be deployed and run. After a while the extension status in the portal will show that the script has completed.

You should be set to RDP into your machine again with the new password you set in the script file.
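
If you prefer scripting to clicking around the portal, the same Custom Script extension can be pushed with PowerShell. Here is a rough sketch only (it assumes the Az PowerShell module, and the resource group, VM name and script URL below are placeholders you would replace with your own):

# Sketch only: push the Custom Script extension from PowerShell instead of the portal.
# Assumes the Az module is installed and ResetPassword.ps1 is hosted somewhere the VM can reach.
Connect-AzAccount

$params = @{
    ResourceGroupName = "MyResourceGroup"         # placeholder
    VMName            = "MyDomainController"      # placeholder
    Location          = "westus"
    Name              = "ResetPassword"
    FileUri           = "https://example.com/ResetPassword.ps1"   # wherever you host the script
    Run               = "ResetPassword.ps1"
}
Set-AzVMCustomScriptExtension @params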

I have no idea why the reset password functionality in Azure decided to exclude AD DCs … but if you get stuck I hope this helps.

-CJ

Importing and Exporting photos from Office 365 and Active Directory

Photos in Office 365 are a pain. This is because there are currently three(!) main places photos are stored in Office 365:

  • SharePoint Online (SPO)
  • Exchange Online (EXO)
  • Azure Active Directory (AAD)

If you are syncing profiles using AD Connect those profiles are being synced into AAD. From there they take a rather arduous and rocky path to EXO and SPO via a number of intermediary steps that may not work or may only complete after a period of time. For example, photos may only sync from EXO to SPO if someone’s photo has not been set directly in SPO and a particular profile property attribute is set correctly.

Needless to say it can be very hard to work out why you have one photo in AD on-prem, one in AAD, one in EXO and another in SPO … all potentially different!

At Hyperfish we built our product to push photos to all the places they should go when someone updates them. This means that when someone updates their photo and that update is approved, it will be resized appropriately for each location and then saved into AD, EXO and SPO immediately. This results in the person’s photo being the same everywhere at the same time. Happy place!

However, we have some customers who want to bulk move/copy photos around between these different systems to get things back in order all at once. To do this we created a helpful wee command-line utility; we have published the source code on GitHub along with a precompiled release if you don’t want to compile it yourself.

Photo Importer Exporter

Creative naming huh! 🙂

The Photo Importer Exporter is a very basic Windows command line utility that lets you:

  • Export photos from SharePoint Online
  • Export photos from Exchange Online
  • Import photos to Active Directory
  • Import photos to SharePoint Online
  • Import photos to Exchange Online

All you do is Export the photos from, say, Exchange Online (the highest resolution location) and it will download them all to the /photos directory. Then you can Import them to, say, Active Directory and the utility will resize them appropriately and save them to AD for you. Simple huh.
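
If you want a feel for the kind of plumbing involved, moving a single photo by hand boils down to reading it from one store and writing it to the other. A rough PowerShell sketch follows (this is not how the utility itself is implemented; it assumes a connected Exchange Online PowerShell session, the ActiveDirectory module, and a made-up user):

# Rough sketch: copy one user's photo from Exchange Online into AD's thumbnailPhoto attribute.
$photo = Get-UserPhoto -Identity "meganb@contoso.com"    # hypothetical user
$bytes = $photo.PictureData                              # raw photo bytes from EXO

# thumbnailPhoto should stay well under 100 KB, so resize large EXO photos before this step.
Set-ADUser -Identity "meganb" -Replace @{ thumbnailPhoto = $bytes }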

Code: https://github.com/Hyperfish/PhotoImporterExporter
Releases: https://github.com/Hyperfish/PhotoImporterExporter/releases

It’s a work in progress and we will be adding bits to it as we come across other scenarios we want to support.

But for now you can grab the code and take a look, suggest additions, make pull requests if you like, log issues … or just use it 🙂

Happy coding,
-CJ

Sniffing Azure Storage Explorer traffic

A friend asked a question about looking at how Azure Storage Explorer makes its API calls to Azure using something like Fiddler.

The issue with just firing up Fiddler and watching traffic is that, in order to decrypt HTTPS traffic, Fiddler installs a root certificate and terminates SSL itself so that it can show you the decrypted payloads going back and forth.

That is normally fine with apps that use the standard WinINET libraries etc… to make HTTPS calls (like Chrome). However, Azure Storage Explorer is an Electron app built on Node.js and doesn’t use these. Node also handles root CAs a bit differently; long story short, it doesn’t trust the root certificate Fiddler installs by default. This means that HTTPS calls fail with an “unable to verify the first certificate” error.

Setting up Fiddler

First you need to set up Fiddler to decrypt HTTPS traffic. You do this in Fiddler’s options under Tools > Options > HTTPS.

This will prompt you to install a certificate that Fiddler uses to terminate SSL in Fiddler so it can show you the decrypted traffic.

Once you have completed this you need to export the certificate Fiddler installed so that you can set up Storage Explorer with it. You can do this through MMC using the steps below, or script it (see the sketch after the steps).

  1. Run MMC.exe
  2. File > Add/Remove Snap-in
  3. Pick Certificates, when prompted choose “Computer account” and “Local computer”
  4. Navigate to Certificates > Trusted Root Certification Authorities > Certificates
  5. Find “DO_NOT_TRUST_FiddlerRoot” certificate
  6. Right Click > All Tasks > Export  
  7. As you go through the wizard choose “Base-64 encoded X.509 (.CER)” for the file format
  8. Save it to your desktop or somewhere you will be able to find it later
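
If you would rather script the export, something along these lines should produce the same Base-64 encoded .cer file (a sketch; run it from an elevated PowerShell prompt since it reads the machine store):

# Sketch: export the Fiddler root certificate from the machine store as a Base-64 encoded .cer file.
$cert = Get-ChildItem Cert:\LocalMachine\Root |
    Where-Object { $_.Subject -like "*DO_NOT_TRUST_FiddlerRoot*" } |
    Select-Object -First 1

$base64 = [Convert]::ToBase64String($cert.RawData, "InsertLineBreaks")
Set-Content -Path "$env:USERPROFILE\Desktop\FiddlerRoot.cer" -Value @(
    "-----BEGIN CERTIFICATE-----"
    $base64
    "-----END CERTIFICATE-----"
)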


Setting up Azure Storage Explorer

First up you need to configure Azure Storage Explorer to use Fiddler as a proxy. This is pretty straightforward.

In Storage Explorer go to the Edit > Configure Proxy menu and add 127.0.0.1 and port 8888 (the Fiddler defaults). Note: no authentication should be used.

Now Storage Explorer will use Fiddler … however, you will start getting “unable to verify the first certificate” errors as Storage Explorer still doesn’t trust the root certificate that Fiddler is using for SSL termination.

To add the Fiddler certificate go to Edit > SSL Certificates > Import Certificates. Pick the .cer file you saved out earlier. Storage Explorer will prompt you to restart in order for these to take effect.

Now when you start using Storage Explorer you should see its traffic in Fiddler, in a readable, decrypted state.

Now you can navigate around, perform various operations, and see exactly what Azure Storage Explorer is doing and how it is doing it.

Happy Coding.
-CJ

Microsoft Graph spanning on-prem and online!

One of the most interesting announcements this week from Ignite 2016, at least in my mind, was the news that Microsoft is adding support for the Microsoft Graph API in hybrid deployments.

This means you can call the Microsoft Graph API like you would normally for Office 365 based mailboxes, for example, but have it actually connect to a mailbox that resides on an on-prem Exchange server!

Let that sink in for a moment. That is pretty sweet!

Typically, if you have an application that is not deployed behind the firewall, i.e. inside the organization’s network, you couldn’t call things like the SharePoint or Exchange APIs without network gymnastics like a VPN, a reverse proxy or punching holes in your firewall (yuk).

Now with this hybrid support in the Graph you can simply call internet-based REST APIs and Microsoft does the work of facilitating that communication back to the on-prem resource.

Currently in preview, the Graph only supports Mail, Calendar and Contacts in this hybrid configuration; however, I can only imagine that support for other endpoints like Users, SharePoint etc… will come at some point.
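
The nice part is that the call itself does not change; for example, listing a user’s mail is the same request whether the mailbox is in Exchange Online or on-prem. A minimal sketch (the access token comes from the normal Azure AD OAuth flow):

# Sketch: the identical Graph call works against EXO or, with hybrid support, an on-prem mailbox.
$headers = @{ Authorization = "Bearer $accessToken" }
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me/messages" -Headers $headers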

You also have to have Exchange 2016 CU3 servers deployed on-prem to get this support, and you need to sync your AD to Azure AD, as authentication is managed that way.

You can read more about these pre-reqs here: http://graph.microsoft.io/en-us/docs/overview/hybrid_rest_support

I think this is a huge benefit for those who are looking to build applications or cloud services that connect to data wherever that sits.

A couple of interesting scenarios to think about:

  • Mobile apps that work outside the firewall, which were previously either not possible or too hard due to connectivity issues
  • Cloud service web apps that you want to connect to on-prem data
  • Tools and Apps that can now work with data either in Office 365 or on-prem


This to me is a huge step in the right direction for Microsoft in their quest to make developers’ lives better in a hybrid world.

Hybrid is not a transitional state. For many it’s the end state. 
John Ross and Randy Drisgill

I’m looking forward to more endpoints coming online in the months ahead!

-CJ

So long Windows Phone, hello more cross plat!

My Trips user:  The My Trips Live Tile doesn’t work on Windows 10 Mobile
Me: Strange let me look into it.
MS: Yes, you are hitting a bug in Win 10 Mobile for older phone apps
Me: When are you going to fix it?
MS:  Never, please rewrite your apps to work around it.

This was a recent exchange I had regarding a problem with the My Trips live tile.  It’s a painful problem for my ardent users of My Trips, many of whom have been using it since I first brought it out for Windows Phone 7 many years ago.

It’s an interesting predicament for me. I appreciate the users of My Trips, but I don’t appreciate being told I need to rewrite part of the app to use new APIs to work around this bug.  Microsoft has always been very focused on backwards compatibility and this flies in the face of that.  Sure, my app code is old.  I wrote it with C# and XAML back when Windows Phone used a Silverlight app model … so it is not a more modern Windows universal app.

What ticks me off is I have supported the platform for years and created a reasonably popular app that filled one of the app gaps in their ecosystem. You would think that, given the widening app gap, MS would bend over backwards to ensure app developers stuck with the platform.  It seems not. It sounds like there are very few apps hitting this bug and for that reason they are not going to fix it. Although this irritates me, I’m sure it irritates and frustrates the users more.

For me the equation is pretty simple.  Time to update the app vs. return, and to be honest it doesn’t make sense given the time I would need to put in to change this code.

Side note for those interested on the code changes needed: It’s not a couple of lines of code to change, it’s quite a bit.  It’s changing how the app generates tiles and how it registers for push notifications from Microsoft.

But I am conflicted, because I know there are still die-hard Windows 10 Mobile fans out there who will not like me for abandoning the app. That pains me.

I have wanted to update My Trips for Windows 10 (for desktops, tablets etc…) for a while and I would much rather put effort into that. Now that Xamarin is in the MS family, I could quite easily target a version for iOS and Android at the same time whilst sharing 80% of the code.


So with all this said I am pretty sure I am leaving the My Trips mobile code alone to wither and die.  It’s been a great ride.  I have wanted to bring it to iOS for a long time, especially since I’m an iOS user these days and I have had a bunch of requests for it.  So hopefully I can get that done and keep My Trips an app with a future.


-CJ

Running apps using Docker Cloud (aka Tutum)

Anyone who has listened to me rant on about how interesting Docker is on the Microsoft Cloud Show may have caught me talking about Tutum.

The short story on Tutum is that it provides an easy-to-use management application over the virtual machines you want to run your apps on with Docker.  It is (sort of) cloud provider agnostic in that it supports Amazon Web Services, Microsoft Azure and Digital Ocean among others.

It was bought by Docker late last year and was recently re-released as Docker Cloud.

What does it provide?

At a high level you still pay for your VMs wherever you host them, but Docker Cloud provides management of them for 2c an hour (after your first free node) no matter how big or small they are.  You write your code, package it in a Docker image as usual and then use Docker Cloud to deploy containers based on those images to your Docker nodes. You can do this manually or have it triggered when you push your image to somewhere like Docker Hub as part of a continuous integration setup.

Once you have deployed your apps (“Services” in Docker Cloud terminology) you can use it to monitor them, scale them, check logs, redeploy a newer version or turn them off.  Docker Cloud provides an easy-to-use web app, REST APIs and a Command Line Interface (CLI).

So how easy is it really?

Getting going …

The first thing you have to do is connect to your cloud provider like Azure.  For Azure this means downloading a certificate from Docker Cloud and uploading it into your Azure subscription.  This lets Docker Cloud use the Azure APIs to manage things in your subscription for you. (details here)

Once you have done that you can start deploying Virtual Machines, “Nodes” in Docker Cloud terminology.  As an example, I created a 2-node cluster of A2-sized VMs in the West US region of Azure.


That’s it.  Click “Launch node cluster”, wait a few mins (OK, quite a few) and you have a functional Docker cluster up and running in Azure.


In Azure you can take a look at what Docker Cloud has created for you.  Note that, as of the time of writing, Docker Cloud provisions “Classic” style VMs in Azure rather than using the newer ARM model.  It also deploys different VMs into their own cloud services and resource groups, which isn’t good for production.  That said, Docker Cloud lets you Bring Your Own Node (BYON), which lets you provision the VMs however you like, install the Docker Cloud agent on them and then register them in Docker Cloud.  Using this you can deploy your VMs with ARM in Azure and configure them however you like.


Deploy stuff …

Now that you have a node or two ready you can start deploying your apps to them!  Before you do this you obviously need to write your app … or use something simple like a pre-canned demo Docker image to test things out.

Docker Cloud makes this really simple through “Services”.  You create a new service, tell it where it should pull the Docker image from, set a few other configuration options like ports to map etc… then click Create and your containers will be deployed to your nodes.

Try this once you have your nodes up and running.  Click Services in the top nav, then Create Service. Under the Jumpstarts & Miscellaneous category you should see the “dockercloud/hello-world” image. Select it and then set it up as follows:


There are only a few things I changed from the default setup.

  1. I moved the slider to 2 in order to deploy 2 containers
  2. Mapped Port 80 of the container to Port 80 of the node and clicked Published.  This maps port 80 of the VM to port 80 of the container running on it so that we can hit it with a web browser.
  3. High availability in the deployment strategy.  This will ensure that the containers are spread across available nodes vs. both on one.

Click “Create and deploy” and you should see your containers starting up.   Pretty simple huh!


Note: There is obviously a lot more available via configuration for things like environment variables and volume management for data that you will eventually need to learn about as you develop and deploy apps using Docker.

Once your containers are deployed you will see them move to the running state.


Now you have two hello world containers running on your nodes.  If you go back to your list of Nodes you should see 1 container running on each.


I want to see the goods, man!

You can test your hello world app out by hitting its endpoint.  You can find out what that is under the Service you created in the Endpoints information.


  1. This is the service endpoint.  It will use DNS round robin to direct requests between your two running containers.
  2. These are the individual endpoints for each container.  You can hit each one independently.

Try it out!  Open the URL provided in a browser and you should see the hello-world page.


Note that the page will indicate which container you are hitting.

Want more containers?  Go into your Service and move the slider and hit apply.


You will get an error.


This is because we mapped port 80 of the Node to the Container and you can only do that mapping once per Node/VM, i.e. two containers can’t both be listening on port 80.  So unless you use HAProxy or similar to load balance your containers you will be limited to one container on each node mapped to port 80.  I might write up another post about how to do this better using HAProxy.
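
You can see the same limitation with plain Docker on a single host; the second run below fails because the host port is already taken (a sketch):

# Sketch: only one container can bind a given host port on a node.
docker run -d -p 80:80 dockercloud/hello-world    # succeeds
docker run -d -p 80:80 dockercloud/hello-world    # fails: port 80 is already allocated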

Automate all the things …

We are a small company and we want to automate things as much as possible to reduce the manual effort required for mundane tasks.  We have opted to use Docker Cloud for helping us deploy containers to Azure as part of our continuous integration and continuous deployment pipeline.

In a nutshell, when a developer commits code it goes through the following pipeline and is automatically deployed to our staging environment:

  1. Code is committed to GitHub
  2. Travis-CI.com is notified and it pulls the code and builds it.  Once built, it creates a Docker image and pushes it to Docker Hub (a sketch of this step is below).
  3. Docker Cloud is triggered by a new image.  It picks it up and redeploys that Service using the new version of the image.
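
The Docker-specific part of step 2 is nothing exotic; it is just a normal image build and push (a sketch, with a placeholder image name and tag):

# Sketch of the build-and-push step the CI server runs.
docker build -t myorg/myservice:latest .
docker push myorg/myservice:latest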

This way, a few minutes after a developer commits code the app has been built and deployed seamlessly into Azure for us.  We have a Big Dev Ops Flashing Thing hanging on the wall telling us how the build is going.

Cool … what else …

At Hyperfish we have been using Tutum for a while during its preview period with what I think is great success.  Sure, there have been issues during the preview, but on the whole I think it has saved us a TON of time and effort setting up and configuring Docker environments.  Hell, I am a developer kinda guy, not much of an infrastructure one, and I managed to get it working easily, which I think is really saying something 🙂

Is this how you will run production?

Not 100% sure, to be honest.  It is certainly a fantastic tool that helps you run your apps easily and quickly.  But there is a nagging sensation in the back of my head that it is yet another service dependency that will have its share of downtime and issues, and that might complicate things.  Then again, you could say that about any additional bit of technology you introduce and take a dependency on. That said, traffic to and from your apps does not go through Docker Cloud; it goes direct to your nodes in Azure, so if Docker Cloud has brief downtime your apps should continue to run just fine.

I have said that, given the size we currently are and with the team focused on building product, we might consider something else only once we can do a better job than it does for us.


All in all I think Docker Cloud has a lot of great things to offer.   It will be interesting to compare and contrast it with the likes of Azure’s new service, Azure Container Service (ACS), as it matures and approaches General Availability.  It’s definitely something we will look at also.

-CJ

Microsoft Graph and what’s changing…

One of the pet peeves we heard no end of complaints about when I was at MS was the lack of a decent change log, even just one targeted at developers. It might seem strange that MS hasn’t done that for the whole of Office 365, but let’s just say it’s a really hard nut to crack.

That said, the great team working on the Microsoft Graph (graph.microsoft.com) APIs has been doing a nice job on the Office dev blogs keeping us up to date on changes in the run-up to the API’s release last week.

One of the other things I have been doing to see what is changing in the API is to keep an eye on the $metadata endpoint. $metadata describes all the entity types, properties, relationships etc…

It’s available via the API when you call: https://graph.microsoft.com/beta/$metadata

The document can be quite large and manually finding changes in it is pretty painful. Diff to the rescue! You can use your favorite diff tool to see what has changed from one copy to the next.
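
For example, grabbing a fresh copy and comparing it with one saved earlier is only a couple of commands (a sketch; any diff tool works in place of git):

# Sketch: download the current $metadata document and diff it against a previously saved copy.
Invoke-WebRequest -Uri 'https://graph.microsoft.com/beta/$metadata' -OutFile 'metadata-new.xml'
git diff --no-index metadata-old.xml metadata-new.xml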

GitHub also does reasonable diffing. I am going to try and keep a copy of the $metadata document in GitHub that people can take a look at.

The repo is here: https://github.com/LoungeFlyZ/MSGraphMetadata

Here is a link to the current diff between the /v1 API and the /beta API as of Nov 23 2015.

https://github.com/LoungeFlyZ/MSGraphMetadata/commit/207346e2daed2d5553d6e655f8d9c3da4a33f830#diff-d6a067e9fd824380cf1a162010b163d6

MS Graph Diff

Simplifying Office 365 Unified API calls with Postman and OAuth 2

The Office 365 Unified API at graph.microsoft.com is a nice API for working with Azure AD and Office 365 from a single endpoint. It is authorized via OAuth 2 flows and is all REST/JSON etc… pretty much what you would expect as a developer.

There are a few ways to play around with the API.

Simplest: Graph Explorer

Harder: Use a tool like Postman

Postman is pretty slick. It lets you craft HTTP requests, their headers, parameters, body etc… and get responses back formatted in various ways. Postman 3 also supports OAuth 2 flows to help simplify the process of authenticating against an API, so you don’t need to do all the various hops and token copying between requests.

OAuth 2 + Postman + Office 365 unified API

Here is how it works.

1. Go install Postman 3 first

2. Set up a GET request to get your profile details from Azure AD

GET Me

3. In the authorization area pick OAuth 2 from the dropdown

OAuth2

4. Next you need to go and register an app, if you haven’t already, in order to get a Client ID and Secret. There are instructions on doing that here.

Note: for the REPLY URL field you need to specify: https://www.getpostman.com/oauth2/callback

When complete make a note of the client id and secret as you will need them shortly.

5. Back in Postman enter the following details for each of the OAuth parameters:

Authorization URL: https://login.windows.net/common/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL: https://login.windows.net/common/oauth2/token
Client ID: (the one you got in the previous step)
Client Secret: (the one you got in the previous step)

Notice at the end of the Authorization URL you need to include the “resource” parameter. This is required with O365 and indicates what endpoint you are trying to get access to.

6. Click the “Get access token” button to initiate the authentication and authorization flow. Postman will pop up a window that will direct you to log into Office 365 and let you consent to the application being given the appropriate privileges.

When complete you will see the OAuth access token, scopes etc… that were returned.

AccessTokens

Type in a name for this token and save it. Then for all subsequent requests you can attach that token to your request like this.

1. make sure your URL is set
2. attach the token to the header of the request
3. execute the request

Me Request Results

All things going well you will get back a nice JSON response with your profile information included.
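
If you want to take the same token outside Postman, the equivalent raw call is tiny. A PowerShell sketch (paste in the access token Postman retrieved):

# Sketch: call the unified API directly using the access token from the OAuth 2 flow.
$token   = "<access token from Postman>"
$headers = @{ Authorization = "Bearer $token" }
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me" -Headers $headers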

Hopefully this helps simplify calling the graph.microsoft.com endpoint, playing with requests and not having to deal with all the icky OAuth goo along the way.

Happy coding!

Bro Down

Big Flashing DevOps Thing with Travis CI and Raspberry Pi 2

A while back I heard about the Big Flashing DevOps Thing, which shows you how to build an LED sign using a Raspberry Pi + BlinkyTape to display the current status of your builds/deployments.  Pretty cool!

Big Flashing DevOps Thing

But at Hyperfish we are using Travis CI for our builds/tests, not Jenkins, which is what the provided code currently supports.

Time to tweak!

The source for the original project is available on GitHub here: https://github.com/muce/SAWS and includes most of what you need.  It has two parts to it:

  1. bash scripts that you schedule to download the status of a project and log it to a file
  2. a python script that reads the log files and updates the sign appropriately

To get this working with Travis CI it was a fairly simple job of tweaking a copy of the existing bash script to download the Travis build status.  Travis provides both XML and JSON feeds secured with a simple authentication token on the query string. Example below:

<Projects>
 <Project
 name="Hyperfish/hyperfish.com"
 activity="Sleeping"
 lastBuildStatus="Success"
 lastBuildLabel="48"
 lastBuildTime="2015-09-25T20:47:25.000+0000"
 webUrl="https://magnum.travis-ci.com/Hyperfish/hyperfish.com" />
</Projects>

The bash script simply downloads this feed, parses out the relevant information and then writes the appropriate color settings for the LEDs to a log file.  If the script sees the build is working it writes a number corresponding to Green in the log file.  It does this for however many repos you want to monitor.

The full travis script is here: https://github.com/LoungeFlyZ/SAWS/blob/master/travis.sh

(I have submitted a Pull Request to the original repo to add this support for Travis too)

Here is a short video of my BlinkyTape updating status and setting the lights.  In my case I set it up to monitor 3 repos in Travis, so the tape is divided into three sections to show status for each one.

Build status showing via BlinkyTape and Raspberry Pi 2


This all requires a working Raspberry Pi and a basic knowledge of running scripts in Linux.  In my case I am running it with Raspbian (a Linux distro for the Pi).

Features I would add to SharePoint 2010

Not a typo in the title 🙂

Back in 2007 I was looking to move to the US and join the SharePoint engineering team.  As part of the early email-based discussions about the role, the guy I was talking to asked me to list off some things I thought were needed for SharePoint development.

I recently stumbled on the OneNote page where I was jotting down some notes before I sent him the email.

Here they are in all their 2007 glory (verbatim minus the bolding):

  1. Remotable APIs … Or API set that wraps the Web Services
  2. Integration of the FrontPage RPCs into the OM
  3. Web Services that take/return object friendly OM friendly data …datasets etc…  + keep XML document based ones.
  4. Ability to create “strongly typed” objects from SharePoint objects… Like strongly typed datasets

Here is my take on where we ultimately ended up for each of these:

#1 – Remotable APIs … this would become the Client Side Object Model in SharePoint 2010.
#2 – FP RPCs … this would be the ability to better interact with files and folders etc… via the CSOM.
#3 – Web Services with friendly data – this would become the first ListData.svc REST/OData endpoint in SharePoint 2010.
#4 – Strongly typed objects – this would become the work we did on LINQ to SharePoint and SPMetal

Finding these notes was pretty nostalgic really!  It’s hard to imagine development in SP without some of these things now … especially the REST services, considering how far they have come!

It’s also interesting in that these were really the first investments in helping developers get code off the SharePoint server. Before this as a developer you really had to create and deploy your own web services and use the SP server side object model (yuck!).

What would be the four things you would write on your list today? Comment below!