This blog is about SharePoint, Office365 and Office Development

Popular Posts

It's a wrap: Office Development Bootcamp 2018


For the second year in a row I had the pleasure of organising the Office Development bootcamp along with my good fellow MVPs Ashish Trivedi & John Liu. This year was a bit special for me as we didn't just deliver the usual three bootcamps we had last year (Sydney, Melbourne and Auckland); we added three additional cities to the mix:
  • Brisbane: Organised by my fellow MVP @ChrisGecks 
  • Hong Kong: my first attempt at organising an event remotely, and it had an awesome turnout of 66 people; great job by Microsoft HK
  • Kuala Lumpur: my second attempt at organising something remotely, and it came very close to matching the Hong Kong event
I personally can't wait for next year's event and the chance to potentially add at least three more cities.

Yo Teams: Running a local https server


I've been using the Teams Yeoman generator for quite a long time, and I've also made one contribution to this awesome open source project.
However, I've always wondered why it runs on local http while the manifest requires the tab endpoint to be an https endpoint. So if you are building a Microsoft Teams tab, you won't be able to run it locally without enabling https on your local server.
The method I used to get around this was ngrok, using its https endpoint, but deep inside I didn't want to expose my local tab code externally; maybe someone out there is trying all ~4.2 billion (16^8) possible sub-domains (nah, just kidding). I think I was just determined to run tabs locally on an https server.

So I updated my fork with the latest changes since my last contribution (almost a year ago), then followed these steps to make the generator create an https local server:
  1. I created a new local branch and called it https (very creative name!)
  2. I noticed I needed to generate a certificate and private key using the openssl command (you can either install the openssl.exe Win32/64 binary or run openssl through Ubuntu on Windows 10 if you have a Windows machine)
  3. I placed the two files in a cert folder under the main app template, which is common across all Yo Teams artefacts
  4. I used the webpack plugin "copy-webpack-plugin" to copy the cert folder into the build output
  5. To make sure webpack spits out my two files, I added the copy step to the webpack server entry under plugins (don't forget to install the copy-webpack-plugin package); see the sketch after this list
  6. Now to the fun and easy part, which is changing the server.ts class: first change import * as http from "http" to import * as https from "https", then replace the server creation with the https version, also shown in the sketch after this list
  7. To run this version of the generator, run npm link
  8. Now link your project folder with the new generator, run it, and when you type gulp serve an https server will be created so you can run the tab locally over https
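The snippets referenced in steps 5 and 6 were originally shown as screenshots; the following is a minimal sketch of what they look like. The cert file names, folder path and port are assumptions rather than the exact values used by the generator.

// webpack server config, under plugins: copy the cert folder to the output
// const CopyWebpackPlugin = require('copy-webpack-plugin');
// new CopyWebpackPlugin([{ from: 'src/app/cert', to: 'cert' }])

// server.ts: swap the http server for an https one
import * as https from 'https';
import * as fs from 'fs';
import * as path from 'path';
import * as express from 'express';

const app = express();

// cert.pem / key.pem generated earlier, for example with:
// openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
const options = {
    key: fs.readFileSync(path.join(__dirname, 'cert/key.pem')),
    cert: fs.readFileSync(path.join(__dirname, 'cert/cert.pem'))
};

const port = process.env.PORT || 3007;
https.createServer(options, app).listen(port, () => {
    console.log('https server listening on port ' + port);
});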


Yo Teams: Azure App Services deployment error



It has been a while since I played around with generator-teams (the Yeoman generator for Microsoft Teams); it was almost six months ago that I demoed the capabilities of this amazing open source project. This time I created a quick project which includes a simple tab; my intention was to run it locally and also publish it to Azure App Services.

It was a very straightforward process publishing the Teams tab to an Azure App Service using a local Git repository and pushing my master branch to it. However, this time it wasn't the easy ride I expected.

I won't go through the obvious steps of setting up your environment for Node.js development and installing the latest Yo Teams package (2.5 at the time of writing this post).
Long story short, I created a new Teams app with only a simple tab, created a new Azure App Service and added local Git as a deployment option so I could push my code to it and achieve a very simple deployment to Azure App Services.

After the awesome generator created my artefacts, I ran a local npm install and gulp build and it was rocking; everything was working fine locally. I initialised my Git repo, added the Azure Git repository as a remote (I called it azure, very creative!), then pushed my master branch to Azure. I was waiting for the magic successful deployment message, but instead I got the following error


Apparently the default node and npm versions used by App Services are v0.10.40 and v1.4.28, which are relatively old and caused some errors in npm.

Using App Settings you can set the Node.js version for the App Service instance, but I couldn't find a switch for npm, which was the one actually causing the error above. So I decided to specify the node & npm versions another way, by adding them to the package.json file as below:
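The screenshot of that change isn't reproduced here, but the idea is the standard engines entry in package.json; this is a trimmed-down fragment and the version numbers are placeholders rather than the exact ones I used.

{
  "name": "my-teams-tab",
  "version": "0.0.1",
  "engines": {
    "node": "8.11.1",
    "npm": "5.6.0"
  }
}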

I made a minor update and pushed the new version to the Azure App Service local Git repo, and yet I stumbled upon another error


I created a new folder called dist under wwwroot so the script would be able to create the iisnode.yml file. I thought that was it; however, I found another error, this time in the gulpfile.js syntax.


The reason this time is that the node version used to run the scripts is still the same old version, so I had to add a new line to deploy.cmd just before the build command line.
And finally, my deployment was successful



And what was supposed to be a one-minute job turned out to be a half-hour job.

SPFx: How the OTB News webpart displays ViewCount


In this blog I'll explain how the out-of-the-box News webpart displays the view count for each promoted site page, aka "News page". At first glance, when a colleague asked me about it, I answered naively with
"Oh, check the ViewCountLifeTime managed property", thinking (silly me) that the modern pages webpart would somehow use the same technique that the old publishing pages embraced, with viewCountLifeTime and viewsLastNDays managed properties that used to give us a lot of options to choose precisely what we want to display.

Back then we weren't worried about where SharePoint stores these values, as it would all be enriched via the search pipeline, which was easy, awesome and just worked.

When my colleague tried the search with some of the classic view count managed properties, they got nothing, which left me scratching my head and honestly questioning my sanity. I have been a bit away from a hands-on role, and by a bit I mean it has been almost four years since I rolled up my sleeves and coded things, as I currently do mainly high-level activities. Without further ado, I decided to look under the hood of the OTB News webpart.

The first thing I noticed, after viewing the bundled code using the pretty print feature in Google Chrome in the file sp-news-webpart.bundle_en-us_someguid.js, is that when the component is about to mount, a function called "updateRealNewsItems" is called. This method takes a set of items (the news items with the view count) as a parameter.



With a bit more digging, by searching for "viewCounts", I managed to find an expression that verifies whether the view count is needed or not, which depends on the display template that you choose. If the view count is needed, the code first tries to update the view count data provider information and then calls refresh data.


The code continues by preparing a request, getting the view count and caching it (so next time it will try to get it from the cache), then adding the view count to the news items returned by search.
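I can't reproduce Microsoft's minified bundle here, but the overall pattern looks roughly like the sketch below. Everything except the updateRealNewsItems name is hypothetical and heavily simplified.

interface INewsItem {
    pageId: string;
    title: string;
    viewCount?: number;
}

// simple client-side cache keyed by page id
const viewCountCache = new Map<string, number>();

// hypothetical call standing in for the webpart's internal view-count data provider
declare function fetchViewCounts(pageIds: string[]): Promise<Map<string, number>>;

async function updateRealNewsItems(items: INewsItem[], showViewCount: boolean): Promise<INewsItem[]> {
    if (!showViewCount) {
        // the chosen display template doesn't render view counts
        return items;
    }
    // only fetch counts that aren't cached yet
    const missing = items.filter(i => !viewCountCache.has(i.pageId)).map(i => i.pageId);
    if (missing.length > 0) {
        const fetched = await fetchViewCounts(missing);
        fetched.forEach((count, id) => viewCountCache.set(id, count));
    }
    // merge the cached counts into the news items returned by search
    return items.map(item => ({ ...item, viewCount: viewCountCache.get(item.pageId) }));
}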

That kept me wondering: is it that hard to enable the search pipeline to update the view count for Site Page content so we can rely on a view count managed property? I really have no answer to that; maybe it is. Additionally, I believe that if the modern webparts were available on GitHub it would make SharePoint developers' lives easier.

Just my $0.02

SPFx: A Facebook Feed webpart with custom UI


So, this post might look trivial or pretty straightforward. However, it's not about how complex it could be from an SPFx point of view; I hate to break it to you guys, it's really simple.

If you would like to aggregate social posts from various platforms in one single view using your own UI elements and design, this post could be useful. Otherwise, it will be a complete waste of time :). So yes, you'd better embed your Facebook feed via an iframe if you don't want to customise the UI or aggregate multiple social posts in one view.

Before we start you need to have the following artefacts:
  • A Facebook app (go to http://developer.facebook.com and follow the simple steps to have your app created)
  • A Facebook page for testing purposes, or an existing page that you have access to.

To play around with the Facebook Graph API and take a look at how the feed JSON object is structured, you can navigate to the Graph Explorer tool at https://developers.facebook.com/tools/explorer and try it out. The endpoint we are after is very simple: a GET request to v3.1/{yourPageID}/feed

When you start playing around with the tool you will find out that there are two main parameters:
  1. Limit (which limits the number of posts); it will be added by default for you with a value of 10
  2. Fields (which dictates which fields you want to retrieve); if you don't supply the fields you will get the default result set returned, which is as below:
"data": [
    {
      "created_time": "2018-07-23T04:49:18+0000",
      "message": "Sydney Global Office 365 Developer Bootcamp 2018",
    },

  3. You will need the access token, which is a bearer token passed as part of the request header; for the purpose of this application you might choose one of two options:
    1. Use an application token: this will require a review of your app; see below what you get if you try your app token
{
  "error": {
    "message": "(#10) To use 'Page Public Content Access', your use of this endpoint must be reviewed and approved by Facebook. To submit this 'Page Public Content Access' feature for review please read our documentation on reviewable features: https://developers.facebook.com/docs/apps/review.",
    "type": "OAuthException",
    "code": 10,
    "fbtrace_id": "Hl1PW01GfO1"
  }
}

    2. Use a page access token: this requires you, as an admin of the page, to grant the app access to the page posts

The drawback of this option is that you will have a very short-lived access token, and Facebook no longer provides a token that never expires (offline access tokens went away in 2012; yup, I'm referencing something that was deprecated more than six years ago).

But there is still hope: you can extend your access token using the method documented here, which allows you to convert your short-lived token (about 2 hours) into a long-lived token that expires in 60 days.
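As a rough illustration of that documented exchange (parameter values are placeholders; check Facebook's documentation for the authoritative shape), the call looks something like this:

// Exchange a short-lived token for a long-lived one (roughly 60 days)
async function getLongLivedToken(appId: string, appSecret: string, shortLivedToken: string): Promise<string> {
    const url = 'https://graph.facebook.com/v3.1/oauth/access_token'
        + '?grant_type=fb_exchange_token'
        + '&client_id=' + appId
        + '&client_secret=' + appSecret
        + '&fb_exchange_token=' + shortLivedToken;
    const response = await fetch(url);
    const json = await response.json();
    return json.access_token; // the long-lived token
}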

From a SharePoint perspective, we are going to create a simple webpart that has three properties:
  • Limit
  • Page Id
  • Access Token
The webpart is very simple and straightforward; I didn't try to include any fancy styles. It's a single React component called FbPostList, which consists of a list of FbPost components.
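As a minimal sketch of how FbPostList could load the feed using the three webpart properties (the interface and function names here are my own illustration, not the exact code from the solution):

interface IFbPost {
    id: string;
    created_time: string;
    message?: string;
}

// Fetch the page feed using the webpart properties: pageId, limit and accessToken
async function getFeed(pageId: string, limit: number, accessToken: string): Promise<IFbPost[]> {
    const url = 'https://graph.facebook.com/v3.1/' + pageId + '/feed'
        + '?limit=' + limit
        + '&fields=id,created_time,message'
        + '&access_token=' + accessToken;
    const response = await fetch(url);
    const json = await response.json();
    return json.data as IFbPost[]; // each entry is rendered by an FbPost component
}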


The main issue here is: do I want the site owner to update the access token every 60 days? That sounds very irritating and, to be honest, utterly stupid :) So I've built an Azure function that runs every 30 days and is responsible for updating the access token, which I've stored in a storage account; the SharePoint webpart is now responsible for reading the access token stored in the storage table. Also, make sure to store your Facebook app secret somewhere safe (Azure Key Vault).

Happy SharePointing!

How MS Flow displays SharePoint Online Taxonomy field values


If you have an Office 365 tenant and haven't played around with MS Flow, I believe you're missing out on something big here. You can use Microsoft Flow to do basically everything. Yes, I said it, "everything"; I might be exaggerating, but I know a friend who does almost everything using MS Flow. If you are in doubt, just follow @johnnliu on Twitter and see for yourselves.

And it's free; yes, for only 2000 runs per month, but for a productivity tool, who wants more than that :) If you are looking to do some integration work, I suggest you go for Azure Logic Apps, which is the underlying platform of Microsoft Flow. If you want to abuse Microsoft Flow, you can add more runs for as little as 8 cents per additional 100 runs.

Let's dive into the topic of this blog. To help you review everything in action, we will start by creating a new flow; for simplicity we will use an HTTP trigger so we can trigger the flow using a simple GET request.



The action we will be performing here is updating SharePoint file properties. To make this work, you will have to connect to an existing SharePoint Online library that has been enriched with a taxonomy field as an attribute of a custom content type derived from the parent "Document" content type.


The main question I had here is how Flow retrieves the classification values, because I wanted to select the value based on a specific parameter and I wanted to piggyback on what MS Flow already knows.
If you trace the XHR requests in the browser developer console, you will be able to find multiple requests to a specific endpoint similar to the following URL:

https://europe-001.azure-apim.net/apim/sharepointonline/shared-sharepointonl-2e00594a-1a29-41fe-91f9-7fd348d1fe86/datasets/https%253A%252F%252FYourSharePointSubdomain.SharePoint.com%252F/tables/49b1fe2f-228d-427e-bfe7-7d3ade765a83

This request allows MS Flow to load all of the selected document library's properties, and that includes all the lookup and taxonomy fields and many other properties.



One particular property is the URL to the taxonomy field values, which is the same URL with /entities/{some_guid} added to it as a suffix.


If we trace the request, we will be able to extract the Authorization Bearer header, which contains typical JWT token claims including:
  • Audience: https://service.flow.microsoft.com/
  • Issuing Authority: https://sts.windows.net/your_tenant_Id
  • AppId: 6204c1d1-4712-4c46-a7d9-3ed63d992682
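For reference, these claims can be read by base64-decoding the payload (middle) segment of the bearer token straight from the developer console; a quick sketch:

// Decode the JWT payload from an Authorization header value ("Bearer eyJ...")
function decodeJwtPayload(bearerToken: string): any {
    const token = bearerToken.replace(/^Bearer\s+/i, '');
    const payload = token.split('.')[1];
    // JWTs use base64url, so restore the standard base64 characters before decoding
    const base64 = payload.replace(/-/g, '+').replace(/_/g, '/');
    return JSON.parse(atob(base64));
}

// e.g. decodeJwtPayload(headerValue).aud === 'https://service.flow.microsoft.com/'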

If we navigate to the Azure portal and search for this app in the enterprise apps under Azure AD, we will find the Microsoft Flow Portal app which, strangely enough, has no permissions visible through the Azure portal.

And that's how Microsoft Flow displays your SharePoint taxonomy field values within the SharePoint update file properties action.


Thoughts on PowerBI anonymous sharing link


So today, in a very brief post, we will explore how Power BI generates the anonymous sharing link. It all started when I created an anonymous sharing link which looked like the following:
https://app.powerbi.com/view?r={some key}

The part I was curious about is the generated key, which looks like a base64 encoded string. I decoded the value and got this JSON string representation

Just from this JSON string representation, I can somehow understand two values, which are the GUID values. k refers to "key", which is the sharing key of the report and which I discovered to be unique per report. The most obvious value was the "t" value, which refers to your tenant; to be even more precise, it's the Azure Active Directory "Directory ID" value. You can always double check by logging into your Azure portal and checking your Azure Active Directory ID on the properties page.
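If you want to try this yourself, the r parameter can be decoded in a couple of lines; this is just a sketch, and the k, t and c property names are the ones that appear in the decoded JSON described above.

// Decode the "r" query string parameter of a Power BI anonymous sharing link
function decodeShareKey(shareUrl: string): { k: string; t: string; c?: string } {
    const r = new URL(shareUrl).searchParams.get('r') || '';
    return JSON.parse(atob(r));
}

// const decoded = decodeShareKey('https://app.powerbi.com/view?r={some key}');
// decoded.k is the per-report sharing key, decoded.t is the Azure AD Directory ID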



The "c" value doesn't really resonate with anything; I thought it somehow refers to the report ID, although one might argue that the key GUID is enough, as it will uniquely identify the report. However, I tried changing it, encoding the object and opening it in the browser. Interestingly enough, my report opened. At first I thought the browser might be caching a certain value, so I tried a completely different browser and it worked as well!

What I did next was even more interesting: I removed the whole "c" property and, guess what, it also worked. So I have no clue what this "c" value refers to!

One interesting thought is whether the sharing key alone is enough from a security point of view, given that the GUID is globally unique as it uses the device ID and timestamp in addition to some other things (algorithm, uniquifier); for more information regarding GUID structure you can refer to this blog post.

That brings us to the end of this brief post.

Auto classify Documents in SharePoint using Azure Machine learning Studio: Part 2


This is part 2 of a two-part blog series which briefly explains how to use Azure Machine Learning to auto classify SharePoint documents. In part one, we covered the end-to-end solution skeleton, which relies on Microsoft Flow. The flow is set to be triggered whenever a new document is uploaded to our target SharePoint library.

The main challenge we faced was how to extract a text representation from the Microsoft Office .docx file; as explained in the previous blog post, I ended up using the .NET version of the open source Tika library to extract the text. In the previous blog post we referred to Azure Machine Learning as a web service we call using the Flow HTTP action. Today we will explore this black box in more detail:
  1. The Data 

    This is by far the most important and complicated step of the whole process, as it is very specific to the problem you're trying to solve. Also, the availability of enough data to train your model with a high probability of correctness is something you will spend most of your time trying to figure out.
    In my example, as it's merely a POC, I chose an easy path. I used an existing dataset available to everyone who has access to Azure ML Studio (the BBC News dataset) and then tailored the SharePoint content to match the dataset.
  2. The Training Experiment

    Creating your first Azure ML training experiment is a relatively easy task compared to the data preparation phase.
    The first step is to navigate to https://studio.azureml.net/ and log in using your Microsoft account; you don't need an active Microsoft Azure subscription or a credit card. LUIS and the Bot Framework were also free, but now you need an Azure subscription to use those two services; I don't know whether this will change in the future, but at the time of writing this blog post ML Studio is still free.
    From the left menu choose Experiments, choose New, then choose Blank Experiment, as we will build this together from scratch.

    Let's name our experiment (our shiny new experiment). We will use the BBC News dataset; you can substitute this with your own prepared set, which will have the text representation of each Office document along with its current category.

    Then we will do some data cleansing: select News and Category from the dataset and clean up empty rows by removing any rows that have missing columns, setting the minimum missing values range to 0 to 1 (that means if even a single column has a missing value, the cleaning mode action will be triggered); we will choose to remove the entire row.

    Now it's getting more exciting: we will use a text analysis technique called Extract N-Gram Features. This uses the News column (the text representation of the document) as an input and, based on the repetition of a single word or tuple of words, it can analyse the tuple's effect on the categorisation.
    First, we will change the default column selection from any string column to only analyse the News column, which represents the text extracted from the Office document.

    Secondly, we will choose the Create vocabulary mode; the resulting vocabulary can be used later on in the predictive experiment. The next option is the N-Gram size, which dictates to what extent you want the tuple to grow. For example, if you keep the default of 1 it will only consider a single word; however, if you choose 2 it will consider a single word and any pair of consecutive words. In our example we will use three, which means the text analysis will consider single-word, two-word and three-word tuples.
    For the weighting function there are multiple options; I will choose TF-IDF, which stands for term frequency / inverse document frequency.

    This technique gives more weight to terms that appear more often than others, while penalising terms that appear across different documents (news items in our case) with different classifications. The final score for each vocabulary term is a mixture of the TF and IDF values.

    There are lots of other options, which can be viewed in more detail in this excellent guide for the N-Gram module here.

    One important option is the desired output features, which is basically how many tuples you want to use to categorise your data. This might be trial and error for a newbie like me until you can see the top N effective tuples, making it easier and faster for the trained model to compare future text against; in my scenario I used 5000 features as the result of the N-Gram feature extraction step.

    After the data and vocabulary preparation, we will use four steps that are common across any training experiment: Split, Train, Score and Evaluate.

    The first step is to split the data randomly, either with some condition or purely based on a random seed, then train the model using part of the data and score the model using the other portion. The last step is to evaluate the model and see how accurate it is.


    Based on the evaluation result you can see the overall model accuracy. At this point, if the model is not hitting the mark (you can set a target accuracy based on business requirements), one possible solution is to change the N-Gram tuple size or output features, or even use a completely different training algorithm. Sometimes it's useful to include additional columns or metadata to help categorise the documents, maybe a department name or author, not only the document content.
    In my sample I got an overall accuracy of 80%, which is acceptable to me, so relying on the text extracted from the documents is sufficient.

    Now let's run the experiment and confirm that all steps execute successfully, which we can validate with the green check mark on every step.

  3. The Predictive Experiment

    After a successful run of the training experiment we can hover over Set Up Web Service; here we could create a retraining web service, which would allow us to provide a dataset as an input and spit out the trained model and evaluation results as an output. For the sake of this blog post we will not play around with that option. We will instead generate a predictive experiment, which will allow us to predict an item's classification based on its text representation. So, using the same button in the lower toolbar, we will choose to create a predictive web service; this will generate the predictive experiment for us. If we run and publish it as generated, we will get plenty of errors, as the generated steps try to create a new vocabulary too.
    Let's use the generated vocabulary of the training experiment as an input (we need to save the result vocabulary of the training experiment as a dataset so we can use it later).


    We will also remove any transformation steps; we will only select a single column (the news text representation), which will be the single input for our web service.

  4. The Webservice
    Let's run the predictive experiment now and make sure it's working; then we can deploy the web service, which we can use to classify the text representation of documents. This will open a new page which allows us to test our web service by supplying text as the "News" input and getting the scored label output as text.

    P.S. You must pass the API key in an Authorization header to make it work ;) see the sketch below.
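As a rough sketch of calling the published web service from code: the request/response shape below follows the general pattern ML Studio generates for Request-Response services, but treat the URL, column name and result indexing as assumptions and copy the exact details from your own service's API help page.

// Call the Azure ML Studio predictive web service with the text of a document
async function classifyText(serviceUrl: string, apiKey: string, newsText: string): Promise<string> {
    const body = {
        Inputs: {
            input1: {
                ColumnNames: ['News'],
                Values: [[newsText]]
            }
        },
        GlobalParameters: {}
    };
    const response = await fetch(serviceUrl, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ' + apiKey // the API key from the web service dashboard
        },
        body: JSON.stringify(body)
    });
    const json = await response.json();
    // the scored label comes back in the output table; the exact column depends on your experiment
    return json.Results.output1.value.Values[0][0];
}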




Auto Classify Documents in SharePoint using Azure Machine Learning Studio Part 1



I was trying to figure out a way to get the text representation of documents stored either in OneDrive or SharePoint Online, so I could run some text analysis techniques using Azure Machine Learning Studio. I know for sure (not really, just guessing) that the SharePoint search index stores a text representation of the document, but I guess this version of the document is not exposed to us.


I also happen to know (this one is not a guess) from the good old days that SharePoint search uses IFilters to get file content as text and then stores this text in the index. I tried to do it in a different way.
So I figured, how about doing this document-to-text conversion myself? I found the Tika text extraction library, a handy Apache open source tool which has been ported to .NET as a NuGet package.

I've created a simple Azure function using Visual Studio; it has been a while since I used the full-fledged Visual Studio, as I've mostly been using Visual Studio Code lately. As you can see, the Azure function is pretty straightforward: just four lines of code to convert the .docx files to a text representation, so we can use any text analysis technique on our SharePoint documents.

Now let's hook this up to a simple flow which is triggered when a new file is uploaded to a specific SharePoint library.
The flow will start, then it will call the Azure function, which will extract the text representation of the Office document and send it to a web service that does some text analysis and returns the document classification value.



Then, within the flow itself, we can update the SharePoint document and set the classification as per the text analysis result.

Hint: We will treat the web service call used in this flow (the HTTP2 action) as a black box for now. To give you a sneak peek, it's based on a multi-class neural network classification algorithm built using Azure Machine Learning Studio, and we will discuss this particular building block in more detail in part 2 of this series.

Now let's upload a new Word document containing text that represents a business article and see the updated category text value.

Here we go: our smart document categorisation flow is able to classify the document as a business document.
In the next part of this blog series, we will discuss the Azure Machine Learning Studio experiment in more detail.

SharePoint Online: Unexpected behavior while trying to demote a news page


In my previous blog post I explained in detail how the new communication site differentiates between a normal site page and a promoted "News" one. It's all about a newly introduced attribute of the SitePage content type called PromotedState.

It's fairly easy to promote a normal site page through the UI using a simple button, but once you promote the page there is no button to return it to a normal site page by changing its PromotedState back to "1".

In this post, I'll walk you through an unexpected behavior when we try to update the value of PromotedState to its original value of "1" using CSOM, to hide the page from the OTB News webpart.

I'm going to use the exact code sample I published before, with minimal modifications; you can refer to my previous blog post, published almost a year and a half ago.

Please note: in the modern experience, old-fashioned custom actions for the ECB context cannot be deployed; you will actually get an error message if you try to deploy an EditControlBlock-based custom action.

For simplicity, and because I just want to show the unexpected behavior when we manually update the PromotedState from "2" to "1", I'll create a new JavaScript function to update the PromotedState; I'll call it demoteNewsItem, and it sets the PromotedState back to "1".

All other helper functions will remain intact; I'll use them to acquire the token and get both the digest and the e-tag, so I'll be able to update the SharePoint item.
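The original function was posted as an image; as a rough sketch (assuming the helpers above provide the access token, request digest and e-tag, and assuming the page lives in the Site Pages library), demoteNewsItem looks something like this:

// Set PromotedState back to "1" for a given site page item via the SharePoint REST API
async function demoteNewsItem(webUrl: string, itemId: number, accessToken: string, digest: string, etag: string): Promise<void> {
    const endpoint = webUrl + "/_api/web/lists/getbytitle('Site Pages')/items(" + itemId + ")";
    await fetch(endpoint, {
        method: 'POST',
        headers: {
            'Authorization': 'Bearer ' + accessToken,
            'Accept': 'application/json;odata=verbose',
            'Content-Type': 'application/json;odata=verbose',
            'X-RequestDigest': digest,
            'IF-MATCH': etag,           // e-tag retrieved by the existing helper
            'X-HTTP-Method': 'MERGE'    // update the item in place
        },
        body: JSON.stringify({
            '__metadata': { 'type': 'SP.Data.SitePagesItem' }, // list item entity type name may differ per site
            'PromotedState': 1
        })
    });
}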

I'm going to call the demote function from my test.js file, which I usually use for debugging purposes; I'm just trying to find a quick and easy way to update the site page's "PromotedState" attribute.

If we navigate back to the site, we find that the Promote button is enabled; however, it has no effect on the item once you have updated PromotedState using the previous mechanism. When I inspected the PromoteToNews item action (mentioned in detail in the previous blog post) I could still see the result is true, which gives you the feeling that the process executed successfully!

A much better and more user-friendly approach (which we will explore in the next blog post) is to create a new SPFx command set for the site pages library. However, having the page stuck in a demoted state is still an issue, as you can see below:



Promote a Page in SharePoint Communication site



I was playing with the new communication site lately and was wondering what differentiates a news page from a normal site page. All of the pages reside in the same Site Pages library. The first thought that entered my head was that maybe it's a different content type.

By running a simple PowerShell script I found out that they are both using the same content type (SitePage) with the ID shown below.



I was actually shocked, as it was an old practice of mine to create a different content type for each content element, along with separate page layouts to dictate the rendering of these various intranet contents.
What I see now is an editing-based experience. You can have various layouts for a single content type; for example, you can have a news page with a single-column layout or a two-column layout. This enables you to create a specific experience per item, not per content type, which is very flexible yet more time consuming from an editing perspective, and it requires more governance to make sure that not every news item ends up with a completely different experience.

That last drawback can be handled by copying an existing post instead of starting from scratch, but again that doesn't stop a super user from messing up the copy.

I know I've got sidetracked as usual; back to the main topic of the post. What I did next was use my poor PowerShell skills to extract all the attributes of both pages, then compare them using WinMerge (a pretty old tool, right?).



I found out that there is a property called PromotedState, which is 0 in the case of a normal site page and 2 for a news page.

If you navigate to a normal site page you will find an option to promote the page, which gives you an option to publish it as a news post (eventually changing the PromotedState from 0 to 2); you can only see the Promote button if the page is published.

To figure out what really happens behind the scenes, I had to look at all the XHR requests in the browser console, and I found that when you click the Promote button it sends a POST request to https://yourtenant.sharepoint.com/sites/sitename/_api/sitepages/pages(10)/PromoteToNews

And the response returns as below, indicating a successful promotion of the page.

Then the page's PromotedState will be updated to 2 and the page will appear in the out-of-the-box News webpart.
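For reference, here is a minimal sketch of calling this endpoint yourself from client-side code; the page id and response shape are illustrative (the verbose OData format is assumed) and you still need a valid request digest.

// Promote a site page to a news post by calling the PromoteToNews endpoint
async function promoteToNews(siteUrl: string, pageId: number, requestDigest: string): Promise<boolean> {
    const response = await fetch(siteUrl + '/_api/sitepages/pages(' + pageId + ')/PromoteToNews', {
        method: 'POST',
        headers: {
            'Accept': 'application/json;odata=verbose',
            'X-RequestDigest': requestDigest
        },
        credentials: 'same-origin' // reuse the browser's SharePoint session
    });
    const json = await response.json();
    return json.d.PromoteToNews; // true means the page was promoted (PromotedState becomes 2)
}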