Category Archives: Development

So long Windows Phone, hello more cross plat!

My Trips user:  The My Trips Live Tile doesn’t work on Windows 10 Mobile
Me: Strange, let me look into it.
MS: Yes, you are hitting a bug in Win 10 Mobile for older phone apps
Me: When are you going to fix it?
MS:  Never, please rewrite your apps to work around it.

This was a recent exchange I had regarding a problem with My Trips live tile.  It’s a painful problem for my ardent users of My Trips, many of whom have been using it since I first brought it out for Windows Phone 7 many years ago.

It’s an interesting predicament for me. I appreciate the users of My Trips, but I don’t appreciate being told I need to rewrite part of the app to use new APIs to work around this bug.  Microsoft has always been very focused on backwards compatibility and this flies in the face of that.  Sure, my app code is old.  I wrote it with C# and XAML back when Windows Phone used a Silverlight app model … so it is not a more modern Windows universal app.

What ticks me off is that I have supported the platform for years and created a reasonably popular app that filled one of the app gaps in their ecosystem. You would think that given the widening app gap MS would bend over backwards to ensure app developers were sticking with the platform.  It seems not. It sounds like there are very few apps hitting this bug and for that reason they are not going to fix it. Although this irritates me, I’m sure it irritates and frustrates the users more.

For me the equation is pretty simple: time to update the app vs. return. And to be honest, it doesn’t make sense given the time I would need to put in to change this code.

Side note for those interested on the code changes needed: It’s not a couple of lines of code to change, it’s quite a bit.  It’s changing how the app generates tiles and how it registers for push notifications from Microsoft.

But I am conflicted, because I know there are still die-hard Windows 10 Mobile fans out there who will not like me for abandoning the app. That pains me.

I have wanted to update My Trips for Windows 10 (for desktops, tablets etc…) for a while and I would much rather put effort into that. Now that Xamarin is in the MS family, I could quite easily target a version for iOS and Android at the same time whilst sharing 80% of the code.


So with all this said I am pretty sure I am leaving the My Trips mobile code alone to wither and die.  It’s been a great ride.  I have wanted to bring it to iOS for a long time, especially since I’m an iOS user these days and I have had a bunch of requests for it.  So hopefully I can get that done and keep My Trips an app with a future.

 

-CJ

Running apps using Docker Cloud (aka Tutum)

Anyone who has listened to me rant on about how interesting Docker is on the Microsoft Cloud Show may have caught me talking about Tutum.

The short story on Tutum is that it provides an easy to use management application over the Virtual Machines you want to run your Docker apps on.  It is (sort of) cloud provider agnostic in that it supports Amazon Web Services, Microsoft Azure and Digital Ocean, among others.

It was bought by Docker late last year and was recently re-released as Docker Cloud.

What does it provide?

At a high level you still pay for your VMs wherever you host them, but Docker Cloud provides management of them for 2c an hour (after your first free node) no matter how big or small they are.  You write your code, package it in a Docker Image as per usual, and then use Docker Cloud to deploy containers based on those images to your Docker nodes. You can do this manually or have it triggered when you push your image to somewhere like Docker Hub as part of a continuous integration set up.

Once you have deployed your app (“Services” in Docker Cloud terminology) you can use it to monitor them, scale them, check logs, redeploy a newer version or turn them off etc…  They provide an easy to use Web App, REST APIs and a Command Line Interface (CLI).
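If you prefer the CLI, deploying looks roughly like this. A minimal sketch, assuming the docker-cloud CLI (the pip package name at the time of writing) and the hello-world image used later in this post; double check the exact commands and flags with docker-cloud -h:

 # Install the CLI and log in with your Docker ID.
 pip install docker-cloud
 docker-cloud login
 # Run a service from a Docker Hub image, publishing port 80,
 # then scale it out to two containers.
 docker-cloud service run -p 80:80 --name hello dockercloud/hello-world
 docker-cloud service scale hello 2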

So how easy is it really?

Getting going …

The first thing you have to do is connect to your cloud provider like Azure.  For Azure this means downloading a certificate from Docker Cloud and uploading it into your Azure subscription.  This lets Docker Cloud use the Azure APIs to manage things in your subscription for you. (details here)

Once you have done that you can start deploying Virtual Machines, “Nodes” in Docker Cloud terminology.  Below I’m creating a 2 node cluster of A2 size in the West US region of Azure. 

[Screenshot: creating a node cluster in Docker Cloud]

That’s it.  Click “Launch node cluster”, wait a few mins (ok, quite a few) and you have a functional Docker cluster up and running in Azure.

[Screenshot: the node cluster deployed and running]

In Azure you can take a look at what Docker Cloud has created for you.  Note that, as of the time of writing, Docker Cloud provisions “Classic” style VMs in Azure rather than using the newer ARM model.  It also deploys each VM into its own cloud service and resource group, which isn’t good for production.  That said, Docker Cloud lets you Bring Your Own Node (BYON), which lets you provision the VMs however you like, install the Docker Cloud agent on them and then register them in Docker Cloud.  Using this you can deploy your VMs using ARM in Azure and configure them however you like.

[Screenshot: the VMs Docker Cloud created, viewed in the Azure portal]

Deploy stuff …

Now that you have a node or two ready, you can start deploying your apps to them!  Before you do this you obviously need to write your app … or use something simple like a pre-canned demo Docker Image to test things out.

Docker Cloud makes this really simple through “Services”.  You create a new service, tell it where to pull the Docker Image from, set a few other configuration options like ports to map etc…, then click Create and your containers will be deployed to your nodes.

Try this once you have your nodes up and running.  Click Services in the top nav, then Create Service. Under the Jumpstarts & Miscellaneous category you should see the “dockercloud/hello-world” image. Select it and then set it up like this:

[Screenshot: hello-world service configuration]

There are only three things I changed from the default setup.

  1. Moved the slider to 2 in order to deploy 2 containers.
  2. Mapped port 80 of the container to port 80 of the node and clicked Published.  This maps port 80 of the VM to port 80 of the container running on it so that we can hit it with a web browser.
  3. Selected High availability as the deployment strategy.  This ensures the containers are spread across the available nodes vs. both landing on one.

Click “Create and deploy” and you should see your containers starting up.   Pretty simple huh!

[Screenshot: containers starting up]

Note: There is obviously a lot more available via configuration for things like environment variables and volume management for data that you will eventually need to learn about as you develop and deploy apps using Docker.

Once your containers are deployed you will see them move to the running state:

[Screenshot: the service in the Running state]

Now you have two hello world containers running on your nodes.  If you go back to your list of Nodes you should see 1 container running on each:

[Screenshot: node list showing one container on each node]

I want to see the goods, man!

You can test your hello world app by hitting its endpoint.  You can find what that is under the Endpoints information for the Service you created.

[Screenshot: service and container endpoints]

  1. This is the service endpoint.  It will use DNS round robin to direct requests between your two running containers.
  2. These are the individual endpoints for each container.  You can hit each one independently.

Try it out!  Open the URL provided in a browser and you should see something like this:

[Screenshot: the hello-world page]

Note that #1 will indicate what container you are hitting.

Want more containers?  Go into your Service and move the slider and hit apply.

[Screenshot: scaling the service with the slider]

You will get an error like this:

[Screenshot: port conflict error]

This is because we mapped port 80 of the Node to the Container, and you can only do that mapping once per Node/VM, i.e. two containers can’t both be listening on port 80.  So unless you use HAProxy or similar to load balance your containers you will be limited to one container on each node mapped to port 80.  I might write up another post about how to do this better using HAProxy.

Automate all the things …

We are a small company and we want to automate things as much as possible to reduce the manual effort required for mundane tasks.  We have opted to use Docker Cloud for helping us deploy containers to Azure as part of our continuous integration and continuous deployment pipeline.

In a nutshell, when a developer commits code it goes through the following pipeline and is automatically deployed to our staging environment:

  1. Code is committed to GitHub
  2. Travis-CI.com is notified and it pulls the code and builds it.  Once built it creates a Docker Image and pushes it to Docker Hub.
  3. Docker Cloud is triggered by a new image.  It picks it up and redeploys that Service using the new version of the image.

This way, a few minutes after a developer commits code the app has been built and deployed seamlessly into Azure for us.  We have a Big Dev Ops Flashing Thing hanging on the wall telling us how the build is going.
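The guts of step 2 are just a docker build and push. A minimal sketch of the kind of commands Travis runs for us (the image name is a placeholder; $TRAVIS_BUILD_NUMBER is an environment variable Travis sets on every build):

 # Build the image, tagged with the Travis build number.
 docker build -t myorg/myapp:$TRAVIS_BUILD_NUMBER .
 # Push it to Docker Hub.
 docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
 docker push myorg/myapp:$TRAVIS_BUILD_NUMBER

With autoredeploy turned on for the Service, Docker Cloud notices the new image on Docker Hub and redeploys the containers for you.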

Cool … what else …

At Hyperfish we have been using Tutum for a while during its preview period with what I think is great success.  Sure, there have been issues during the preview, but on the whole I think it has saved us a TON of time and effort setting up and configuring Docker environments.  Hell, I am a developer kinda guy, not much of an infrastructure one, and I managed to get it working easily, which I think is really saying something 🙂

Is this how you will run production?

Not 100% sure, to be honest.  It is certainly a fantastic tool that helps you run your apps easily and quickly.  But there is a nagging sensation in the back of my head that it is yet another service dependency that will have its share of downtime and issues, and that might complicate things.  But I guess you could say that about any additional bit of technology you introduce and take a dependency on. That said, traffic to and from your apps does not go through Docker Cloud; it goes direct to your nodes in Azure, so if Docker Cloud has brief downtime your app should continue to run just fine.

I have said that, for the size we currently are and with the team focused on building product, we might consider something else only once we can do a better job than it does for us.


All in all I think Docker Cloud has a lot of great things to offer.   It will be interesting to compare and contrast these with the likes of Azure’s new service, Azure Container Service (ACS) as it matures and approaches General Availability.  It’s definitely something we will look at also.

-CJ

Microsoft Graph and what’s changing…

One of the things we heard no end of complaints about when I was at MS was the lack of a decent change log, even just one targeted at developers. It might seem strange that MS hasn’t done that for the whole of Office 365, but let’s just say it’s a really hard nut to crack.

That said, the great team working on the Microsoft Graph (graph.microsoft.com) APIs have been doing a nice job on the Office dev blogs keeping us up to date on changes in the run-up to the API’s release last week.

One of the other things I have been doing to track what is changing in the API is to keep an eye on the $metadata endpoint. $metadata describes all the entity types, properties and relationships etc…

It’s available via the API when you call: https://graph.microsoft.com/beta/$metadata

The document can be quite large and manually finding changes in it is pretty painful. Diff to the rescue! You can use your favorite diff tool to see what has changed from one copy to the next.
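For example, from a shell (quoting the URL so $metadata isn’t treated as a shell variable; the file names are just placeholders):

 # Grab the current beta metadata.
 curl -s 'https://graph.microsoft.com/beta/$metadata' > metadata-new.xml
 # Compare it against the copy you saved last time.
 diff metadata-old.xml metadata-new.xml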

GitHub also does reasonable diffing. I am going to try and keep a copy of the $metadata document in GitHub that people can take a look at.

The repo is here: https://github.com/LoungeFlyZ/MSGraphMetadata

Here is a link to the current diff between the /v1 API and the /beta API as of Nov 23 2015.

https://github.com/LoungeFlyZ/MSGraphMetadata/commit/207346e2daed2d5553d6e655f8d9c3da4a33f830#diff-d6a067e9fd824380cf1a162010b163d6

[Screenshot: the $metadata diff on GitHub]

Simplifying Office 365 Unified API calls with Postman and OAuth 2

The Office 365 Unified API at graph.microsoft.com is a nice API for working with Azure AD and Office 365 from a single endpoint. Authorized via OAuth 2 flows, all REST/JSON etc… Pretty much what you would expect as a developer.

There are a few ways to play around with the API.

Simplest: Graph Explorer

Harder: Use a tool like Postman

Postman is pretty slick. It lets you craft HTTP requests, their headers, parameters, body etc… and get responses back formatted in various ways. Postman 3 also supports OAuth 2 flows to help simplify the process of authenticating against an API, so you don’t need to do all the various hops and token copying between requests.

OAuth 2 + Postman + Office 365 unified API

Here is how it works.

1. Go install Postman 3 first

2. Set up a GET request to get your profile details from Azure AD

[Screenshot: the GET request for me]

3. In the authorization area pick OAuth 2 from the dropdown

[Screenshot: OAuth 2 selected in the authorization area]

4. Next you need to go and register an app, if you haven’t already, in order to get a Client ID and Secret. There are instructions on doing that here.

Note: for the REPLY URL field you need to specify: https://www.getpostman.com/oauth2/callback

When complete make a note of the client id and secret as you will need them shortly.

5. Back in Postman enter the following details for each of the OAuth parameters:

Authorization URL: https://login.windows.net/common/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL: https://login.windows.net/common/oauth2/token
Client ID: (the one you got in the previous step)
Client Secret: (the one you got in the previous step)

Notice at the end of the Authorization URL you need to include the “resource” parameter. This is required with O365 and indicates what endpoint you are trying to get access to.
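For the curious, what Postman does when you kick off the flow in the next step is a standard OAuth 2 authorization code exchange: it sends you to the Authorization URL to sign in and consent, catches the code on the redirect back to its callback URL, then swaps that code for tokens. The token request it makes looks roughly like this (the values in caps are placeholders):

 curl -X POST 'https://login.windows.net/common/oauth2/token' \
   -d grant_type=authorization_code \
   -d code=CODE_FROM_REDIRECT \
   -d redirect_uri=https://www.getpostman.com/oauth2/callback \
   -d client_id=YOUR_CLIENT_ID \
   -d client_secret=YOUR_CLIENT_SECRET \
   -d 'resource=https://graph.microsoft.com'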

6. Click the “Get access token” button to initiate the authentication and authorization flow. Postman will pop up a window that will direct you to log into Office 365 and let you consent to the application being given the appropriate privileges.

When complete you will see the OAuth access token, scopes etc… that were returned.

[Screenshot: the OAuth access token and scopes returned]

Type in a name for this token and save it. Then for all subsequent requests you can attach that token to your request like this.

1. make sure your URL is set
2. attach the token to the header of the request
3. execute the request

[Screenshot: the me request results]

All things going well you will get back a nice JSON response with your profile information included.
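The curl equivalent of that request, assuming the beta endpoint for your own profile, is simply:

 curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" 'https://graph.microsoft.com/beta/me'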

Hopefully this helps simplify calling the graph.microsoft.com endpoint, playing with requests and not having to deal with all the icky OAuth goo along the way.

Happy coding!


Big Flashing DevOps Thing with Travis CI and Raspberry Pi 2

A while back I heard about the Big Flashing DevOps Thing, which shows you how to build an LED sign using a Raspberry Pi + BlinkyTape to display the current status of your builds/deployments.  Pretty cool!

[Photo: the Big Flashing DevOps Thing]

But at Hyperfish we are using Travis CI for our builds/tests, not Jenkins, which is what the provided code currently supports.

Time to tweak!

The source for the original project is available on GitHub here: https://github.com/muce/SAWS and includes most of what you need.  It has two parts to it:

  1. bash scripts that you schedule to download the status of a project and log it to a file
  2. a python script that reads the log files and updates the sign appropriately

To get this working with Travis CI it was a fairly simple job of tweaking a copy of the existing bash script to download the Travis build status instead.  Travis provides both XML and JSON feeds, secured with a simple authentication token on the query string. Example below:

<Projects>
 <Project
 name="Hyperfish/hyperfish.com"
 activity="Sleeping"
 lastBuildStatus="Success"
 lastBuildLabel="48"
 lastBuildTime="2015-09-25T20:47:25.000+0000"
 webUrl="https://magnum.travis-ci.com/Hyperfish/hyperfish.com" />
</Projects>

The bash script simply downloads this feed, parses out the relevant information and then writes the appropriate color settings for the LEDs to a log file.  If the script sees the build is working it writes a number corresponding to Green in the log file.  It does this for however many repos you want to monitor.
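A simplified sketch of the approach for a single repo (the feed URL, token, log file path and color numbers are placeholders; the real script linked below handles multiple repos):

 #!/bin/bash
 # Pull the CCTray-style XML feed for the repo and extract lastBuildStatus.
 STATUS=$(curl -s "https://magnum.travis-ci.com/repositories/Hyperfish/hyperfish.com/cc.xml?token=YOUR_TOKEN" \
   | grep -o 'lastBuildStatus="[^"]*"' | cut -d'"' -f2)
 # Write a color number for the python script to pick up: green on success, red otherwise.
 if [ "$STATUS" = "Success" ]; then
   echo "2" > /home/pi/saws/status.log
 else
   echo "1" > /home/pi/saws/status.log
 fi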

The full travis script is here: https://github.com/LoungeFlyZ/SAWS/blob/master/travis.sh

(I have submitted a Pull Request to the original repo to add this support for Travis too)

Here is a short video of my BlinkyTape updating status and setting the lights.  In my case I set it up to monitor 3 repos in Travis, so the tape is divided into three sections to show status for each one.

[Video: build status showing via BlinkyTape and Raspberry Pi 2, posted by Chris Johnson (@loungeflyz)]

This all requires a working Raspberry Pi and a basic knowledge of running scripts in Linux.  In my case I am running it with Raspbian (a Linux distro for the Pi).

Features I would add to SharePoint 2010

Not a typo in the title 🙂

Back in 2007 I was looking to move to the US and join the SharePoint engineering team.  As part of the early email-based discussions about the role, the guy I was talking to asked me to list off some things I thought were needed for SharePoint development.

I recently stumbled on the OneNote page where I was jotting down some notes before I sent him the email.

Here they are in all their 2007 glory (verbatim minus the bolding):

  1. Remotable APIs … Or API set that wraps the Web Services
  2. Integration of the FrontPage RPCs into the OM
  3. Web Services that take/return object friendly OM friendly data …datasets etc…  + keep XML document based ones.
  4. Ability to create “strongly typed” objects from SharePoint objects… Like strongly typed datasets

Here is my take on where we ultimately ended up for each of these:

#1 – Remotable APIs … this would become the Client Side Object Model in SharePoint 2010.
#2 – FP RPCs … this would be the ability to better interact with files and folders etc… via the CSOM.
#3 – Web Services with friendly data – this would become the first ListData.svc REST/OData endpoint in SharePoint 2010.
#4 – Strongly typed objects – this would become the work we did on Linq to SP and SPMetal

Finding these notes was pretty nostalgic really!  It’s hard to imagine development in SP without some of these things now … especially the REST services, considering how far they have come!

It’s also interesting in that these were really the first investments in helping developers get code off the SharePoint server. Before this as a developer you really had to create and deploy your own web services and use the SP server side object model (yuck!).

What would be the four things you would write on your list today? Comment below!

MS Cloud Show – Ep36 – An Interview with Rob Howard of the Office 365 Developer Platform Team

I had a chance to sit down with Rob Howard on the Office 365 Developer Platform team here at Microsoft and ask him about his history with SharePoint, what’s changing about the way MS builds products now that we live in a services/cloud world, and throw in a question about the one thing he would change in SharePoint if he were to do it again.

Rob and I get to work together a lot at Microsoft and he is one of the nicest and most knowledgeable people on all things Office 365 / SharePoint development.

http://www.microsoftcloudshow.com/podcast/Episodes/036-interview-with-rob-howard-of-the-office-365-api-team

-CJ

SharePoint/Office 365 at Build 2014

I’m heading to the Build conference next week in San Francisco, YAY, my first //build ever.  We have a bunch of great developer-related content for SharePoint and Office 365 there.  Don’t worry if you have not developed for SharePoint or Office 365 before, there is content to get you up to speed!

Here are the sessions:

Building Connected Productivity Apps (Brian Jones)
Office Power Hour – New developer APIs and features for Apps for Office (Rolando Jimenez Salgado)
SharePoint Power Hour – New developer APIs and features for Apps for SharePoint (Rob Howard)
SharePoint 2013 Apps with AngularJS (Jeremy Thake)
Building Enterprise Social Apps with Yammer (Jose Juarez Comboni)
The brand new OneNote service – reach the massive user base with your apps (Gareth Jones)
Developing Office 365 Cloud Business Apps (Dan Fernandez)
Deep Dive into Mail Compose Apps APIs (Andrew Salamatov)

Also, Office 365 will have a booth, so come visit and connect with the speakers and the team from Redmond.

SharePint @ //build, Wed at 7pm @ Chieftain Irish Pub

see you there!

-CJ

Sandbox code is “deprecated”, long live the Sandbox

“we have deprecated the use of custom managed code within the sandboxed solution” – Brian Jones, Principal Program Manager, Apps for SharePoint

It’s been a long time coming and widely anticipated that this announcement would come at some point.  It’s great to see the announcement and the clarity many have been asking for.

I feel like I’m in a good position to comment on this and give some background on why I think deprecating code-based sandbox solutions is a good idea. I was on the SharePoint engineering team when the sandbox was being built, and for a period of time I was the Program Manager for the feature.

Background: Sandbox Solutions were introduced in SharePoint 2010.  They allowed a packaged set of assets and code to be uploaded to a SharePoint site.  That could consist of declarative components like XML for adding things like List Templates, as well as compiled code for things like Web Parts or Event Receivers.

When the Sandbox was being designed and built it was about 2 – 3 years prior to SharePoint 2010 being released. Azure, and cloud computing in general, either didn’t exist or was in its infancy. SharePoint needed a way to upload customizations/components to SharePoint sites where the administrators were not comfortable installing Farm Solutions, aka Full Trust solutions. Microsoft itself was a perfect example of this.  If I built a web part, I, as a Microsoft employee at the time, couldn’t load it onto the SharePoint sites that MS IT ran.  No 3rd party web parts or products. This was a common problem in many large organizations and we heard about it time and time again from customers.

Sandbox Solutions were the answer. They allowed users to upload a solution and have SharePoint run it in a controlled, secured, sandboxed process. The main thing this gave SharePoint was the ability to isolate the code in that solution, ensuring that if it crashed or was badly behaved it didn’t break the rest of the SharePoint environment.

The problem was that Sandbox Solutions were a feature of Windows SharePoint Services (WSS), which was a different product from SharePoint Portal Server (SPS), which was built on top of WSS.  The API set available in the Sandbox was limited to WSS APIs, and even then only a subset of them. There were good reasons for this, but at the end of the day it was very limiting for people.  Ideally it would have been great to have lots of SPS APIs available too. But that didn’t happen (different story).

So that is the background about how/why Sandbox came about.

… now fast forward to today …

In a nutshell the Sandbox was a good solution to the problem faced when it was designed. However, it’s not a great solution to the problem given the technology we have today.

Why is it no good today you ask?

A lot happened after the Sandbox was designed and built.  Cloud computing took off, and running isolated code got easier in all sorts of places, e.g. sandboxed apps on a phone.  A lot was learnt.

In short, it’s my belief that SharePoint shouldn’t have been trying to replicate an isolated code hosting environment.  That is reinventing the wheel, and there are other teams at MS who build products that do this extremely well already, namely IIS and Azure.

Think about that for a second.  Imagine being given some arbitrary code and told to run it, but doing it in a way that was safe, secure, manageable and fault tolerant.  It’s actually quite a tough challenge. If you say it’s easy then you should try doing it instead of talking about it 🙂 (tip with wider life applicability too)

So today MS clarified that SharePoint was getting out of the code hosting game.  Why?  Because it was limited and there are better solutions to this problem today. 

The new SharePoint app model is designed to solve this by moving “sandbox” code to an alternative host e.g. IIS or Azure or <insert thing that runs code here>.

Sandbox code might be “dead”, but the new app model IS the new Sandbox!

I see the reasons why the sandbox came about the same as I do for the new app model.  They are solving the same problem.  How do you allow someone to customize, extend and build new things on SharePoint without compromising the integrity of SharePoint itself?  That is the goal.

Today in SharePoint 2013 and Office 365 we have the ability to build solutions that use this new app model.  Sure, it’s not perfect in the APIs it provides and there is plenty of scope for adding things.  I am certain it will evolve to cater for more of these things over time.  That said, it is MUCH better suited for the long term.  I for one am loving the ability to use all the latest dev tools and technologies in Azure that were not possible in SharePoint previously.

The app model may not be applicable or possible for everyone to use today and that is fine. It’s going to develop, and I’d bet that will change over time.  But it is the right path moving forward.  This might cause some pain for people in the short term and I understand the frustration people have with changes like this. But I would MUCH rather deal with this change than be limited to an inferior set of capabilities in the longer term.  This is the right move long term (in my humble opinion).  Short term pain, long term gain.

Getting SharePoint out of the code hosting game was the right thing to do and I applaud the team for clarifying their position on this.

I look forward to the SharePoint Conference in March where hopefully (fingers crossed) we will hear more about the future of the new app model and how it will address its shortfalls today.

-CJ

Managing your Azure cloud costs with Kerrb

One of the big problems developers and organizations have using cloud services like Azure is the potential for costs to go crazy if you don’t shut your dev, test or temporary Virtual Machines off. Some time back Andrew Connell and I got talking and had an idea for an online service that would help you manage those costs.  We talked with some people and found loads of them were concerned about using Azure and Amazon Web Services because of these cost overrun type of issues.

So we decided to fix it …  Introducing Kerrb.

Kerrb is a SaaS product designed to save you money by automatically turning off Azure VMs that you forget about.  If you forget to turn off a virtual machine Kerrb will make sure it’s turned off on a schedule that you decide on.

Kerrb is still being built, but you can sign up for the launch list and be one of the first to get access when it is ready.  We will send you updates on how development is progressing and give those on the launch list the first opportunity to sign up and test out the system when it’s ready. Also, as an added bonus, if you are on the launch list we will honor the pricing we have up on the site, even if we decide to tweak it prior to launch.

Kerrb will start small and evolve quickly as demand and feedback drive the product development. The high priority “Pri 0” [1] feature is to turn off Virtual Machines in Azure if you forget, but we have a lot of other great features on the roadmap, including support for Amazon Web Services as well as other leading cloud providers.

Keep up to date with developments and help us get the word out by:

  1. Signing up for the launch list
  2. Liking Kerrb on Facebook
  3. Keeping an eye on the blog for updates and news
  4. Following @KerrbApp on Twitter

Have a read of a blog post Andrew wrote on the Kerrb blog here: Using VMs for Dev, Test & Show – Perspectives from an Indie Consultant, Trainer and Presenter

And something I wrote about Managing cloud spend in a development organization

We look forward to hearing your comments and feedback!

-CJ

[1] Pri 0 – Microsoft speak for the highest priority features in product development. You have to have all the Pri 0’s.