Friday, 23 June 2017

All hail the new Shiny Office Seller Dashboard


Finally, and after an almost eight-month wait since I first learned that the Office Store Seller Dashboard would be merged with the other developer stores into one single Microsoft Developer dashboard, it has happened.

I was checking my Seller Dashboard account as usual when I noticed that the new dashboard had been rolled out, and now I can access all my apps in one central location.

My first interaction with the Seller Dashboard was back in late 2012, when the Office Store was still in beta. I remember two major issues with the provider/developer experience that caused me a great deal of frustration.

The first issue: if your add-in is free, you have absolutely no idea who downloaded it, but if it's paid you can get limited information extracted from the sales report, which gives you a very simple tabular view of the sales transactions. This information includes only the following:
(market, country, state if the buyer is within the US market, and the purchase amount in local currency)

There was no way to get any kind of information about the acquisition, let alone any user contact details.

With the new Developer dashboard, an additional option has been added for add-ins. This option allows the add-in provider to store lead information in a target system of choice with a simple click of a button (Edit Lead Configuration). The option is available for both free and paid applications.

The available targets for the lead information are:

  1. Dynamics CRM Online
  2. Salesforce
  3. Azure Table storage
  4. Marketo
  5. Azure Blob storage

To be honest, seeing this option after all these years makes me super excited. Now Office Add-in providers can use the store as a proper lead-generation tool. They can be more proactive, contact application consumers, understand why the conversion rate for a specific application is low, and seek proper feedback in order to improve and provide a better service.

In addition to these benefits, having such a mature platform will improve the quality of the add-ins listed on the store, as providers can now use it as a proper marketing tool.

The second bit: I always found the existing reports very basic, giving only limited metrics (views/downloads/purchases/trials) that span just the current week and the past three weeks. There was no way to see my add-in's performance this quarter vs. the same quarter last year unless I somehow managed to store the data somewhere else.

The new dashboard has a new report called Acquisitions which, unfortunately, I couldn't get to work (currently I'm getting a blank page), but I presume this report will answer many of the questions I have. If you still need the old reports, you can access them via the legacy reports view.



Another exciting part is having "Teams App" as an additional app type that you can submit to the Office Store, although there is no actual category for Teams apps on the Office Store website yet.

 



Friday, 16 June 2017

SharePoint webhooks: the good, the bad and the ugly

I'm always a fan of separation of concerns, as it simplifies and abstracts development. The introduction of the App model in SharePoint 2013 was a huge step forward: it let me run my custom code in an isolated, sandboxed environment where I can easily debug and troubleshoot.

Moving forward, I started working with SharePoint Online and became a big fan of SharePoint remote event receivers: they give me total control over how to build things. What I liked about remote event receivers is that there are no retries; SharePoint Online simply fires the endpoint call to the WCF service and doesn't really bother with the response. It's totally up to you to build your own failover mechanism, which can be an ECB custom action that resends the same message to the service endpoint.

The only exception to the no-retries behaviour is the AppInstalled remote event receiver, which retries three times before giving up.

With SharePoint webhooks, it's completely different.

"The Good"- Your endpoint has to be verified at creation time

Some people, including me, would argue that it's good practice to verify the endpoint at creation time, as it ensures you don't register an invalid URL.
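
To illustrate, here's a minimal sketch of that handshake, assuming a Node.js endpoint built with Express (the /api/webhook route is just a placeholder of mine): when SharePoint creates the subscription, it POSTs a validationtoken query parameter and expects it echoed back as plain text within 5 seconds.

    var express = require('express'); // npm install express
    var app = express();

    app.post('/api/webhook', function (req, res) {
        // SharePoint's validation request carries a validationtoken query
        // parameter; echo it back as text/plain to prove we own the endpoint
        if (req.query.validationtoken) {
            res.set('Content-Type', 'text/plain');
            return res.status(200).send(req.query.validationtoken);
        }
        // otherwise it's a real change notification
        res.sendStatus(200);
    });

    app.listen(process.env.PORT || 3000);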

"The Good"- it's basically a HTTP POST request to the notification client endpoint

This one is a major advantage, as it makes webhooks easier to implement than the old WCF service endpoint and allows developers to build notification clients using their tool of choice.
**Although I have built a remote event receiver endpoint using Node.js and wcf.js.

"The Bad"- You can't register the webhook for a specific library event
I see this as a big disadvantage. I only need to receive a notification when a specific event occurs on the resource (the list); I don't really care about other events, and I don't need to receive all that noise from SharePoint Online.

"The Bad"- You need to keep your webhook alive
For some bizarre reason, webhooks expire after a period of time (six months), so your application needs to update the registered subscriptions and extend their expiration, as in the sketch below.
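
As a rough sketch (assuming the Node.js request module, a valid OAuth bearer token, and placeholder site/list/subscription ids), extending a subscription is a PATCH against the subscription resource with a new expirationDateTime:

    var request = require('request'); // npm install request

    request({
        url: "https://tenant.sharepoint.com/sites/dev/_api/web/lists(guid'<list-id>')" +
             "/subscriptions('<subscription-id>')",
        method: 'PATCH',
        headers: {
            'Authorization': 'Bearer <access-token>',
            'Content-Type': 'application/json'
        },
        body: {
            // push the expiry out again (six months is the maximum)
            expirationDateTime: new Date(Date.now() + 180 * 24 * 3600 * 1000).toISOString()
        },
        json: true
    }, function (err, response) {
        if (err) { return console.error(err); }
        console.log(response.statusCode); // 204 means the subscription was extended
    });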

"The Ugly"-In short the notification message  is basically  useless
the notification message  consist of one or more of the following notification object
         "subscriptionId":"91779246-afe9-4525-b122-6c199ae89211",
         "clientState":"00000000-0000-0000-0000-000000000000",
         "expirationDateTime":"2016-04-30T17:27:00.0000000Z",
         "resource":"b9f6f714-9df8-470b-b22e-653855e1c181",
         "tenantId":"00000000-0000-0000-0000-000000000000",
         "siteUrl":"/",
         "webId":"dbc5a806-e4d4-46e5-951c-6344d70b62fa"
As you can easily tell, you can only reconstruct the resource object; you have absolutely no clue why you got this notification, which lets the resource (a document library) send a lot of unnecessary noise to your notification client. In order to get the changes, you have to call the list's GetChanges endpoint to understand why you received the notification object and decide whether to act or not.
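
Here's a minimal sketch of that follow-up call (same assumptions as the renewal sketch above): POST a change query to the list's GetChanges endpoint and inspect the returned change entries to decide whether to act:

    var request = require('request'); // npm install request

    request({
        url: "https://tenant.sharepoint.com/sites/dev/_api/web/lists(guid'<list-id>')/GetChanges",
        method: 'POST',
        headers: {
            'Authorization': 'Bearer <access-token>',
            'Accept': 'application/json;odata=verbose',
            'Content-Type': 'application/json;odata=verbose'
        },
        body: {
            query: {
                __metadata: { type: 'SP.ChangeQuery' },
                Item: true,   // item-level changes only
                Add: true,    // items added
                Update: true  // items updated
            }
        },
        json: true
    }, function (err, response, body) {
        if (err) { return console.error(err); }
        // each change entry says what actually happened, so you can finally
        // decide whether this notification is worth acting on
        console.log(body.d.results);
    });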

To be quite frank, Microsoft Graph webhooks are done in a very neat way; the only drawback is that subscriptions expire within about 70 hours.


Tuesday, 18 April 2017

Add more smarts to your bot: Detecting emotions from Giphy posts


I've been blogging about bots since April 2016, which was about the time I discovered Microsoft's amazing Bot Framework. I've written a series on how to detect user intent from text messages using LUIS (Language Understanding Intelligent Service); you can find it here.

In this post I'll talk about understanding user emotion from the Giphy posts embedded within Microsoft Teams. This can be generalized to any image exchanged between the user and the bot.

First, let's get the image content

Using Microsoft Teams' insert-Giphy functionality, we can see that it adds an attachment to the message exchanged between the user and the bot; this attachment is a mere link to the chosen Giphy.
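
Here's a minimal sketch (assuming the Node.js botbuilder SDK v3) of pulling that link out of the incoming message; the attachment's contentUrl is what we'll feed to the emotion detection service below:

    var builder = require('botbuilder'); // npm install botbuilder

    var connector = new builder.ChatConnector({
        appId: process.env.MICROSOFT_APP_ID,
        appPassword: process.env.MICROSOFT_APP_PASSWORD
    });

    var bot = new builder.UniversalBot(connector, function (session) {
        var attachments = session.message.attachments;
        if (attachments && attachments.length > 0) {
            // the inserted Giphy arrives as an attachment whose contentUrl
            // is simply a link to the image
            var giphyUrl = attachments[0].contentUrl;
            session.send('Got your Giphy: ' + giphyUrl);
        }
    });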



Let's add the image understanding capability to our bot

We will use the Emotion API, which is part of Cognitive Services, to detect the Giphy's emotion. It's basically a POST request to the emotion-detection endpoint, with the subscription key added as an "Ocp-Apim-Subscription-Key" header:

    var request = require('request'); // npm install request

    // this snippet lives inside a function that receives `callback`
    request(
        {
            url: 'https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize',
            method: 'POST',
            headers: {
                'Ocp-Apim-Subscription-Key': '*Add Your Subscription key here*',
                'Content-Type': 'application/json'
            },
            body: {
                'url': 'your Giphy image URL'
            },
            json: true
        }, function (err, response, body) {
            if (err) {
                return callback(err, null);
            }
            // successful call
            callback(null, getHighScoreEmotion(body));
        });

A successful response contains an object with a score for each possible emotion. I've created a function that returns the highest-scoring emotion; the bot then sends a message to the user reflecting the detected emotion.
    function getHighScoreEmotion(body) {
        var val = 0;
        var emotion = null;
        if (body.length > 0) {
            // the Emotion API returns a scores object per detected face;
            // pick the highest-scoring emotion on the first face
            for (var score in body[0].scores) {
                if (body[0].scores[score] > val) {
                    val = body[0].scores[score];
                    emotion = score;
                }
            }
            return emotion;
        }
        return null;
    }

And this is how it looks when you send your bot a Giphy :)


Now our bot can understand and respond to the Giphys shared by the user.

Wednesday, 22 February 2017

Botframework: Building a proactive Bot

In this post I'll walk you through a quick demo I prepared for MS Ignite Australia 2017. It was the last demo of my session and I didn't get the chance to actually make it work in front of the live audience, so I decided -what the hell- I'm going back home, I'll record this sh*t, and I'll put it out there.

First thing you need to know: to make the bot start a conversation or send a message proactively, you need to save the conversation address object, which consists of the following:
  • Bot Object
    • Bot ID
    • Bot Name
  • User Object
    • User ID (a Base64-encoded value, not an email address)
  • Service URL (points to localhost when running the bot in the emulator)

From the above, it's hard to reconstruct the address object by hand, so I started looking for a way to save it. There are a couple of events triggered:
  1. conversationUpdate: when a user or bot starts exchanging messages
  2. contactRelationUpdate: when a user adds the bot to his/her contact list

I see the second event as the more convenient one to tap into in order to store the user address somewhere.
Once you've stored the user address, you can either create a new conversation or send a new message within an existing conversation. The message can be triggered by any external event and can be exposed as another endpoint of the bot service itself, as shown in the sketch below.
Note: you can easily construct the address yourself if you are working with the botframework emulator.
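
Here's a minimal sketch of the whole idea, assuming the Node.js botbuilder SDK v3 with Express and in-memory storage for the address (a real bot would persist it somewhere durable):

    var express = require('express'); // npm install express
    var builder = require('botbuilder'); // npm install botbuilder

    var app = express();
    var connector = new builder.ChatConnector({
        appId: process.env.MICROSOFT_APP_ID,
        appPassword: process.env.MICROSOFT_APP_PASSWORD
    });
    var bot = new builder.UniversalBot(connector);
    app.post('/api/messages', connector.listen());

    var savedAddress; // in-memory only; persist this in a real bot

    // fired when the user adds the bot to his/her contact list
    bot.on('contactRelationUpdate', function (message) {
        if (message.action === 'add') {
            savedAddress = message.address; // everything needed to reach the user later
        }
    });

    // an extra endpoint on the bot service that any external event can call
    app.post('/api/notify', function (req, res) {
        if (savedAddress) {
            var msg = new builder.Message().address(savedAddress).text('Hello, proactively!');
            bot.send(msg); // goes out within the existing conversation
        }
        res.sendStatus(200);
    });

    app.listen(process.env.PORT || 3978);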

And this is how it looks:

Tuesday, 7 February 2017

SharePoint Framework: Multiple webpart instances within the same page with Angular2


In August 2016, I published a quick guide on how to build an Angular2 webpart using the awesome -back then new- SharePoint Framework:
http://www.sharepointtweaks.com/2016/08/sharepoint-framework-angular2-sample.html
It was basically a demonstration of what can be done in the context of a GitHub issue: https://github.com/SharePoint/sp-dev-docs/issues/7.

As Andrew Connell pointed out, it's rather an Angular limitation. If we search for a workaround, we can easily find one shared by Christoph Krautz here: https://github.com/angular/angular/issues/7136


Sounds easy, right? However, trying this workaround in the SPFx world isn't that straightforward. You will get an error, as the first dependency of your AppModule is not recognized by the CompileMetadataResolver.


My first thought was: how can I get the ComponentFactoryResolver without even passing it? I used the private _componentFactoryResolver member of the ApplicationRef object.

Now I can create the factory and update the selector to match the webpart ID.
My second problem was how to distinguish the different webparts: if I passed the selector to the module constructor, it would have the same value for all the webparts on the page, which also leads to only a single webpart being bootstrapped.
I added an id to the main component, to be used in addition to the tagName as the selector, and used the description field to carry the id value.
However, that didn't solve the problem, as the value injected into the AppModule constructor was still the same.

What to do next? I ran out of ideas. Not really: I came up with a stupid one, but it works. Instead of the selector, I passed the Document object, and in the constructor I search for all the elements that match the webpart's main component selector, and voila!
It works!



DISCLAIMER: this is a hack for experimental purposes only. I'm no Angular2 expert; I'm actually learning how to use this thing at the moment of writing these words.

The code can be found at https://github.com/ministainer/Angular2MultipleSampleSPFx

Thursday, 5 January 2017

Inconvenient license verification in Office Store


A Licensing validation challenge

I have a single non-free add-in listed on the Office Store, and I noticed that once the add-in trial is over, it still functions as a full version. The Office Store licensing framework won't remove the add-in from the user's available add-ins.

It's totally up to the add-in developer to limit the add-in's functionality using the store's license verification endpoint.

The license token is issued and passed to the add-in as a query parameter (?et). You can easily grab the license token, which is Base64-encoded in the case of Office add-ins and URL-encoded in the case of Outlook add-ins.
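
As a minimal sketch (assuming Node.js with the request module), the server side of your add-in can take that token and pass it to the Office Store verification web service to check the entitlement:

    var request = require('request'); // npm install request

    // etToken is the value your add-in page received in the ?et query parameter
    function verifyLicense(etToken, callback) {
        var url = 'https://verificationservice.officeapps.live.com' +
                  '/ova/verificationagent.svc/rest/verify?token=' +
                  encodeURIComponent(etToken);
        request(url, function (err, response, body) {
            if (err) { return callback(err); }
            // the response is a small XML document describing the entitlement:
            // trial or paid, expiry date, number of seats, and so on
            callback(null, body);
        });
    }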







Interestingly enough, if you are using Outlook Web App, the license token parameter will always be an empty string. This is a known issue, as Humberto Lezama pointed out on this StackOverflow thread.

After diving into the code, I found out the reason: the token is never retrieved; only the add-in manifest file is retrieved, because the script mistakes the store type for "Exchange" instead of "OMEX" (Office Marketplace Experience).


Tracing back the store type value, I found out that it has been hardcoded as Exchange in https://r1.res.office365.com/owa/prem/16.1569.8.2186260/scripts/microsoft.owa.extensibilitynext.js, regardless of the source of the add-in.




However, after correcting the value to OMEX, I faced the below error:



Build your own licensing model 

Instead of relying on the Office Store licensing model, you can list your add-in as a free add-in on the Office Store and build your own licensing model.
Building your own licensing framework is not an uncommon practice; one of the most popular apps on the store -Nintex Workflows for Office 365- uses a similar approach.


Thursday, 24 November 2016

Event Driven Development in Office 365

The beginning: Event Receivers


Since the early days of SharePoint on-premises, we have been able to easily register event receivers at the web, list and item levels to trigger custom actions when a particular event occurs. These abilities were carried over to SharePoint Online, where the custom action is hosted in a remote endpoint (remote event receivers).
You can register event receivers based on the list template or the item content type, as sketched below. A similar technique exists in Project Server and Project Online, with a slight difference in naming (event handlers). So basically, we've been doing event-driven development in SharePoint Online since the beginning of Office 365.
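
Registration is typically done through CSOM, but the list's event receiver collection is also exposed over REST. Here's a rough sketch (assuming the Node.js request module, a bearer token, and placeholder URLs of mine) of adding a remote event receiver that fires when an item is updated:

    var request = require('request'); // npm install request

    request({
        url: "https://tenant.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Documents')/eventreceivers",
        method: 'POST',
        headers: {
            'Authorization': 'Bearer <access-token>',
            'Accept': 'application/json;odata=verbose',
            'Content-Type': 'application/json;odata=verbose'
        },
        body: {
            __metadata: { type: 'SP.EventReceiverDefinition' },
            ReceiverName: 'MyItemUpdatedReceiver',
            EventType: 10002, // ItemUpdated (the -ed events run asynchronously)
            ReceiverUrl: 'https://myservice.example/RemoteEventReceiver.svc',
            SequenceNumber: 1000
        },
        json: true
    }, function (err, response) {
        if (err) { return console.error(err); }
        console.log(response.statusCode); // 201 means the receiver was registered
    });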

What's not good about Event Receivers

1. Unlike SharePoint on-premises, remote event receivers are loosely coupled from SharePoint Online; if the event receiver endpoint is down for some reason, SharePoint Online won't retry executing the event (with the exception of app-related remote event receivers).
2. Remote event receivers and Project Online event handlers are built as WCF endpoints, so SharePoint Online and Project Online send SOAP messages to these endpoints (not very portable, huh!).

A Whole new world!


Microsoft is gradually replacing traditional remote event receivers with one of the following options:


Microsoft Flow

An easy-to-use tool with a nice, user-friendly interface that allows super users and IT pros to build event-driven scenarios, with more than 446 templates and 85+ triggers, including a generic HTTP endpoint that opens the door to unlimited possibilities. Microsoft Flow offers an admin interface at https://flow.microsoft.com where IT pros can build self-service flows using the browser.


Azure Logic Apps


Microsoft Flow is built on top of Azure Logic Apps. They both have the same designer and the same list of connectors. Azure Logic Apps is the preferable option in B2B, mission-critical scenarios. A logic app is managed like any other Azure service via the Azure portal, and flows can be designed in the browser or in Visual Studio using the Azure Logic Apps extension for Visual Studio, which requires the Azure Resource Manager SDK to be installed first.

Here is the designer within Visual Studio 2015:


Webhooks

Webhooks are another option, one that is more suitable for developers and B2B applications. Webhooks give you a means to build event-driven applications at massive scale without needing an Azure subscription. Webhooks were first introduced for Microsoft Graph Outlook resources such as mail, contacts, and calendar. In addition to Microsoft Graph resources, they have been introduced to SharePoint as well.
You can create a subscription via a simple POST request.
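
As a minimal sketch (assuming the Node.js request module, a Microsoft Graph access token, and a placeholder notification URL), subscribing to new Inbox messages looks like this:

    var request = require('request'); // npm install request

    request({
        url: 'https://graph.microsoft.com/v1.0/subscriptions',
        method: 'POST',
        headers: { 'Authorization': 'Bearer <access-token>' },
        body: {
            changeType: 'created',
            notificationUrl: 'https://contoso.example/api/webhook', // must answer the validation handshake
            resource: "me/mailFolders('Inbox')/messages",
            // mail subscriptions max out at roughly 70 hours
            expirationDateTime: new Date(Date.now() + 60 * 60 * 1000).toISOString(),
            clientState: 'secretClientValue'
        },
        json: true
    }, function (err, response, body) {
        if (err) { return console.error(err); }
        console.log(body.id); // keep this id so you can renew or delete the subscription
    });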


Webhooks enable you to build more complex solutions where the subscription or the flow trigger can be created on the fly, based on another trigger.