Saturday, 5 August 2017

Basic botframework Yeoman generator

I think it's about time to talk about the botframework Yeoman generator I wrote around January this year (2017). Since April 2016 I've been fascinated by what Microsoft's botframework can do. I wrote a three-part blog post describing in detail how to build an office assistant using both the Microsoft botframework and LUIS (Language Understanding Intelligent Service); for more details have a look here.

Earlier this year I was trying to speed up the scaffolding process of bot development with the botframework, focusing specifically on the Node.js version of the SDK. In a day or so I ended up building a basic Yeoman generator that produces three types of sample bots, similar to the choices you get with the Azure Bot Service.

The generator is not a very sophisticated one; as a matter of fact it's a very basic attempt so far, and I plan to start adding more templates. One template I'm thinking about is an Office 365-aware bot.

The node package was published almost 7 months ago and it can be installed using

After the installation, all you need to do is create a new directory and run the following command
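For illustration, the typical Yeoman flow looks like the following; note that the package name below is a placeholder of my own, not the real one (use the actual name from the npm link above):

```shell
# Placeholder package name -- substitute the generator's real npm name.
npm install -g yo generator-botframework-sample

# Create an empty directory and run the generator from inside it:
mkdir mybot && cd mybot
yo botframework-sample
```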

You will then be guided through a few simple steps to generate the skeleton of your bot: it will first ask you to specify the name of the bot, then the author, and then the location of the generated files.

After that comes the main decision: which type of bot you want, out of the three templates. For simplicity we will choose the echo bot, which simply replies back with the user's own message.
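To give a feel for what the echo template produces, here is a minimal sketch using the Node.js botbuilder SDK (v3-era API) together with restify; the actual generated files may differ in detail:

```javascript
// Minimal echo bot sketch (botbuilder v3 + restify), assuming both
// packages are installed. The generated template is along these lines.
var restify = require('restify');
var builder = require('botbuilder');

// Set up the HTTP endpoint the connector listens on.
var server = restify.createServer();
server.listen(process.env.PORT || 3978);

var connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});
server.post('/api/messages', connector.listen());

// The "echo" part: reply with exactly what the user said.
var bot = new builder.UniversalBot(connector, function (session) {
    session.send('You said: %s', session.message.text);
});
```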

After the generator is done generating the seed of your bot, you can either update it with your own logic and make it more than just a basic skeleton, or you can simply run the generated code, connect to it using the emulator, and start playing around with it. If you choose to run it, this is exactly what you will get.

The end-to-end process takes less than 3 minutes to install, generate and run your bot. I'm currently in the process of adding more templates to the generator; any contribution is very much appreciated.
The code can be found at

Friday, 23 June 2017

All hail the new Shiny Office Seller Dashboard

Finally! After almost an eight-month wait since I first learned that the Office store Seller Dashboard would be merged with the other developer stores to become one single Microsoft Developer dashboard, it has arrived.

I was checking my Seller dashboard account as usual when I noticed that the new dashboard had been rolled out, and now I can access all my apps in one central location.

My first interaction with the Seller dashboard was back in late 2012, when the Office store was in beta. I remember having two major issues with the provider/developer experience, and these issues caused me a great amount of frustration.

The first issue- If your add-in is free, you have absolutely no idea who downloaded it; if it's paid, you get only limited information extracted from the sales report, which gives you a very simple tabular view of the sales transactions. This information includes only the following:
(Market, Country, State if the buyer is within the US market, and the local-currency purchase amount)

There was no way to get any information about the acquisition, let alone any kind of user contact details.

With the new Developer dashboard an additional option has been added to the add-ins. This option allows the add-in provider to store the lead information into a target system of choice by a simple click of a button (Edit Lead Configuration). This option is available for both Free and Paid applications.

The available targets for the lead information are:

  1. Dynamics CRM Online
  2. SalesForce
  3. Azure Table 
  4. Marketo
  5. Azure Blob

To be honest, seeing this option after all these years makes me super excited. Now Office add-in providers can use the store as a proper lead generation tool: they can be more proactive and contact application consumers, understand why the conversion rate for a specific application is low, and seek proper feedback in order to improve and provide a better service.

In addition to these benefits, having such a mature platform will improve the quality of the add-ins listed on the store, as providers can now use it as a proper marketing tool.

The second bit- I always found the existing reports very basic, giving me only limited metrics (view/download/purchase/trial) spanning only the current week and the past three weeks. There was no way to see my add-in's performance this quarter vs. the same quarter last year unless I somehow managed to store the data somewhere else.

The new dashboard has a new report called Acquisitions which, unfortunately, I couldn't get to work (currently I'm getting a blank page), but I presume this report will answer many of the questions I have. If you still need the old reports, you can access them by viewing the legacy reports.

Another exciting part is having "Teams App" as an additional app type that you can submit to the Office store, although there is no actual category for Teams Apps on the Office store website yet.


Friday, 16 June 2017

SharePoint webhooks: the good, the bad and the ugly

I've always been a fan of separation of concerns, as it simplifies and abstracts development. The introduction of the app model in SharePoint 2013 was a huge step forward: I could run my custom code in an isolated, sandboxed environment where I can easily debug and troubleshoot.

Moving forward, I started working with SharePoint Online and became a big fan of SharePoint remote event receivers; they give me total control over how to build things. What I liked about remote event receivers is that there are no retries: SharePoint Online simply triggers the call to the WCF service endpoint and doesn't really bother with any response. It's totally up to you to build your own failover mechanism, which can be an ECM custom action that resends the same message to the service endpoint.

The only exception to the no-retries behaviour is the AppInstalled remote event receiver, which retries 3 times before giving up.

With SharePoint webhooks, things are completely different.

"The Good"- Your endpoint has to be verified at creation time

Some people including me would argue that it's a good practice to verify the endpoint at the creation time which enable you to make sure that you don't register an invalid URL
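The creation-time handshake boils down to echoing a query parameter back as plain text. A minimal sketch (Node stdlib only; the helper name is my own):

```javascript
// When SharePoint creates the subscription it POSTs to your notification
// URL with a `validationtoken` query parameter and expects that exact
// token echoed back as text/plain. This helper extracts the token from
// the request URL, or returns null for a regular notification POST.
function validationResponse(requestUrl) {
  const token = new URL(requestUrl, 'http://placeholder')
    .searchParams.get('validationtoken');
  return token; // null => not a handshake, just a notification
}

// Inside an http.createServer handler you would do something like:
//   const token = validationResponse(req.url);
//   if (token !== null) {
//     res.writeHead(200, { 'Content-Type': 'text/plain' });
//     res.end(token);
//   }
```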

"The Good"- it's basically a HTTP POST request to the notification client endpoint

This one is a major advantage as it makes it easier to implement than the old WCF service endpoint which allows developers to build notification clients using their own tool of choice
**Although I've built a Remote Event Receiver endpoint using nodeJS and wcf.js

"The Bad"- You can't register the webhook for a specific library event
I see this as a big disadvantage, I need only to receiver notification when a specific event occurs to the resource (the list) I don't really care about other events, I don't need to receive all these noise from SharePoint online.

"The Bad"- You need to keep your webhook alive
for some bizarre reason, webhooks will expire after a period of time (6 months) so your application need to update the registered subscriptions and extend the activation.
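A renewal job therefore has to compute a fresh expiration date and PATCH it onto the subscription. A small sketch (the helper name is mine; the endpoint shape in the comment follows the REST API):

```javascript
// Webhook subscriptions expire after at most ~6 months, so a scheduled
// job must push the expiration forward before it lapses. This helper
// computes the new expirationDateTime as an ISO string, capping the
// extension at roughly 180 days ahead.
function nextExpiration(fromDate, days) {
  const capped = Math.min(days, 180);
  const d = new Date(fromDate.getTime());
  d.setDate(d.getDate() + capped);
  return d.toISOString();
}

// The renewal request itself would look roughly like:
//   PATCH {site}/_api/web/lists(guid'<list-id>')/subscriptions('<sub-id>')
//   { "expirationDateTime": nextExpiration(new Date(), 180) }
```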

"The Ugly"-In short the notification message  is basically  useless
the notification message  consist of one or more of the following notification object
As you can easily tell, you can only reconstruct the resource object, you have absolutely no clue why you got this notification which enable the resource (document library) to send a lot of unnecessary noise to your notification client. In order to get the changes, you have to call /GetChanges library endpoint to understand why you received the notification object and decide wether to act or not.
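To illustrate why a GetChanges round trip is always needed: each notification entry identifies only where something changed, never what changed. A sketch of unpacking the payload into GetChanges calls (field names follow the documented payload; the URL-builder helper is my own):

```javascript
// A webhook POST body wraps an array of notifications under `value`.
// Each entry carries ids (siteUrl, webId, resource = the list id) but no
// event details, so every notification must be followed by GetChanges.
function toGetChangesUrls(notificationBody, tenantHost) {
  return notificationBody.value.map(function (n) {
    // n.siteUrl is server-relative (e.g. "/sites/dev"),
    // n.resource is the list id the subscription was created on.
    return 'https://' + tenantHost + n.siteUrl +
      "/_api/web/lists(guid'" + n.resource + "')/GetChanges";
  });
}
```

GetChanges itself is a POST carrying a change query (which change types and token range you care about), and it's from that response that you finally learn what happened.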

To be quite frank, Microsoft Graph webhooks are done in a very neat way; the only drawback is that they expire within 70 hours.

Tuesday, 18 April 2017

Add more smarts to your bot: Detecting emotions from Giphy posts

I've been blogging about bots since April 2016, which was about the time I discovered Microsoft's amazing botframework. I've written a series on how to detect user intent from text messages using LUIS (Language Understanding Intelligent Service); you can find it here.

In this post I'll talk about understanding user emotion from the Giphy posts embedded within Microsoft Teams. This can be generalized to any image communication between the user and the bot.

First Let's get the Image Content

Using the Microsoft Teams insert-Giphy functionality, we can see that it adds an attachment to the message between the user and the bot; this attachment is simply a link to the chosen Giphy.

Let's add the image understanding capability to our bot

We will use the emotion detection service, which is part of Cognitive Services, to detect the Giphy's emotion. It's basically a POST request to the emotion detection service with the subscription key added as an "Ocp-Apim-Subscription-Key" header:

var request = require('request'); // the npm "request" module

request({
    url: '',
    method: 'POST',
    headers: {
        'Ocp-Apim-Subscription-Key': '*Add Your Subscription key here*',
        'Content-Type': 'application/json'
    },
    body: {
        'url': 'your Giphy image URL'
    },
    json: true
}, function (err, response, body) {
    if (err) {
        // handle the error
        return;
    }
    // successful call: body holds the emotion scores
});

The successful response contains an object with a score for each possible emotion. I've created a function that returns the highest-scoring emotion, and the bot then sends the user a message reflecting the detected emotion.
function getHighScoreEmotion(body) {
    var val = 0;
    var emotion = null;
    if (body.length > 0) {
        for (var score in body[0].scores) {
            // keep track of the highest-scoring emotion seen so far
            if (body[0].scores[score] > val) {
                val = body[0].scores[score];
                emotion = score;
            }
        }
        return emotion;
    }
    return null;
}
And now, this is how it looks when you send your bot a Giphy :)

Now our bot can understand and respond to the Giphys shared by the user.

Wednesday, 22 February 2017

Botframework: Building a proactive Bot

In this post I'll walk you through a quick demo I prepared for MS Ignite Australia 2017. It was the last demo and I didn't get the chance to actually make it work in front of the large live audience, so I decided -what the hell- I'm going back home, I'll record this sh*t and put it out there.

First thing you need to know: to make the bot start a conversation or send a message proactively, you need to save the conversation address object, which consists of the following:
  • Bot Object
    • Bot ID
    • Bot Name
  • User Object
    • User ID (base64 encoded value not email)
  • Service URL ( points to local host when running the bot using the emulator)

From the above, it's hard to reconstruct the address object by hand, so I started looking for a way of saving this address. There are a couple of events triggered:
  1. conversationUpdate: when a user or bot starts exchanging messages
  2. contactRelationUpdate: when a user adds the bot to his/her contact list

I see the second event as the more convenient one to tap into and store the user address somewhere.
Once you have stored the user address, you can either create a new conversation or send a new message within an existing conversation. The message can be triggered by any external event and can be exposed as another endpoint of the bot service itself.
Note: you can easily construct the address if you are working with the botframework emulator
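As a minimal sketch of the above (botbuilder v3 era; the helper name is my own): the address saved from an incoming message can be replayed as-is to post into the same conversation, while starting a brand-new conversation usually means dropping the conversation id and letting the framework create one:

```javascript
// Take a saved address object (bot, user, conversation, serviceUrl) and
// turn it into one that starts a new conversation: deep-copy it and
// remove the conversation id so the framework allocates a fresh one.
function toNewConversationAddress(savedAddress) {
  const address = JSON.parse(JSON.stringify(savedAddress)); // deep copy
  delete address.conversation; // force a brand-new conversation
  return address;
}

// With the botbuilder SDK you would then do something like:
//   bot.on('contactRelationUpdate', msg => store.save(msg.address));
//   const reply = new builder.Message()
//     .address(toNewConversationAddress(store.load()))
//     .text('Hello, proactively!');
//   bot.send(reply);
```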

And this is how it looks:

Tuesday, 7 February 2017

SharePoint Framework: Multiple webpart instances within the same page (Angular 2)

In August 2016 I put together a quick guide on how to build an Angular 2 webpart using the awesome -new back then- SharePoint Framework; it was basically to demonstrate what can be done in the context of a GitHub issue.

As Andrew Connell pointed out, it's rather an Angular limitation; if we search for a workaround, we can easily find one shared by Christoph Krautz here.

Sounds easy, right? However, trying out this workaround in the SPFx world isn't that straightforward. You will get an error, as the first dependency of your AppModule is not recognized by the CompileMetadataResolver.

My first thought was: how can I get the ComponentFactoryResolver without even passing it? I used the _componentFactoryResolver member of the ApplicationRef object.

Now I can create the factory and update the selector to match the webpart ID.
My second problem was how to distinguish the different webparts: if I passed the selector to the module constructor, it would have the same value for all the webparts on the page, which also leads to only a single bootstrapped webpart.
I added an id to the main component, to use in addition to the tagName as the selector, and used the description field to represent the id value.
However, that didn't solve the problem, as the value injected into the AppModule constructor was still the same.

What to do next? I ran out of ideas. Well, not really: I came up with a stupid one, but it works. Instead of the selector I passed in the Document object, and in the constructor I search for all the elements that match the webpart's main component selector, and voila!
It works!
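The trick can be sketched as a small helper (plain JS; the helper name and selector handling are my own, and `appRef.bootstrap(factory, node)` mirrors Angular's ApplicationRef API):

```javascript
// Multi-instance workaround sketch: instead of relying on a single
// selector baked into the module, query the Document for every element
// hosting this web part's root component and bootstrap a separate
// component into each one.
function bootstrapEachInstance(doc, appRef, factory, selector) {
  const hosts = doc.querySelectorAll(selector);
  for (let i = 0; i < hosts.length; i++) {
    appRef.bootstrap(factory, hosts[i]);
  }
  return hosts.length; // number of web part instances bootstrapped
}
```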

DISCLAIMER: this is a hack for experimentation purposes only. I'm no Angular 2 expert; I'm actually learning how to use this thing at the moment of writing these words.

The code can be found

Thursday, 5 January 2017

Inconvenient license verification in the Office Store

A Licensing validation challenge

I have a single non-free add-in listed on the Office store, and I noticed that once the add-in trial is over it still functions as a full version. The Office store licensing framework won't remove the add-in from the user's available add-ins.

It's totally up to the add-in developer to limit the functionality of the add-in using the store license verification endpoint.

The license token is issued and passed to the add-in as a query parameter (?et). You can easily get the license token, which will be base64-encoded in the case of Office add-ins and URL-encoded in the Outlook add-ins case.
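A minimal sketch of reading the token in Node (the helper name is my own; the two encodings follow the behaviour described above):

```javascript
// The license token arrives in the `et` query parameter: base64-encoded
// XML for Office add-ins, URL-encoded XML for Outlook add-ins. Decode
// accordingly before sending it on to the verification service.
function decodeLicenseToken(et, isOutlook) {
  if (isOutlook) {
    return decodeURIComponent(et);
  }
  return Buffer.from(et, 'base64').toString('utf8');
}
```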

Interestingly enough, if you are using Outlook Web App the license token parameter will always be an empty string, which is a known issue, as Humberto Lezama pointed out on this StackOverflow thread.

After diving into the code, I found out the reason: the token is never retrieved, only the add-in manifest file, because the script mistakes the store type to be "Exchange" instead of "OMEX" (Office Marketplace Experience).

Tracing the store type value back, I found out that it has been set as a hardcoded value in
as "Exchange", regardless of the source of the add-in.

However, after correcting the value to "omex" I faced the below error.

Build your own licensing model 

Instead of relying on the Office store licensing model you can list your add-in as a free add-in on the Office store and build your own licensing model.
Building your own licensing framework is not an uncommon practice; one of the most popular apps on the store -Nintex Workflows for Office 365- uses a similar approach.