This blog is about SharePoint, Office 365 and Office Development


Retro: Organising a Multi-city Developer Bootcamp




So, last year was a very special year for me. Exactly a year ago, I was thrilled to share with more than 11 MVPs across the APAC region the honour of hosting six Office Developer Bootcamps. What made last year special is that we continuously kept raising the bar. We started in 2017 with the first Global Office 365 Developer Bootcamp in just three cities: Sydney, Melbourne and Auckland. For 2018 we decided to increase the count by 100%, adding Brisbane, Hong Kong and Kuala Lumpur to the mix. You might wonder why HK and KL; I'll let you guys know a bit later.

I thought, with a little nudge from Shiva, I'd share some of the lessons I've learned organising Office 365 Developer Bootcamps across such diverse cities and communities. Here they are, in no particular order:
1. Pull an awesome team together: having Ashish, Cameron, John, Paul, Chris and many more to support this event was key to its success in both 2017 and 2018.
2. Plan your trip right: if you decide to do a lot of travelling, you've got to plan it right. I remember booking a multi-city trip from Sydney to Auckland to Brisbane and then back to Sydney; it helps with the budget and keeps the Mrs less annoyed.
3. Remote event planning is stressful: I think we can all agree that it's easier to run an event in your own city. Last year, I believe the toughest event of all was Kuala Lumpur. The main reason was that I had no co-speakers, and I had to arrive at Microsoft Malaysia very early in the morning, having flown in the night before (people who know me well know I'm not a morning person and can't function properly before 10 am!). I also had to change the room setup to match the desired classroom layout. I still feel bad for the attendees, as I ran the whole event from 8 am to 3 pm.
This year I'm fortunate to have a whole team in Kuala Lumpur to support the event, and almost 200 registrations; I even decided not to travel and deliver a remote session instead, which should keep someone very happy!
4. Keep the event format dynamic: this is something I got completely wrong in the first year. What might be a typical event in one part of the world is completely different in another, as attendees' expectations can differ completely. You will have to tailor your content to fit the location; having a local team will definitely point you in the right direction.
5. Do your best to understand cultural differences: for almost every city, Friday was our first choice for the event day, as we discovered that full-day weekend training events usually have a very high dropout rate.
That wasn't the case for Kuala Lumpur: although I'm a Muslim with two Mohammeds in my name, I completely forgot that Malaysia is a Muslim-majority country where Friday is either a day off or a half working day. I had to move the date to Tuesday and change my hotel and tickets.


So why Kuala Lumpur and Hong Kong? The reason is very simple: it was easier for me to travel to these cities visa-free, as I still hold an Egyptian passport!

I'll leave you guys with some photos from the events.





Protecting your WebAPI using Azure AD


In this post, I will discuss yet another useful Azure AD feature, especially if you are building something that uses the Microsoft 365 identity platform.
Let's assume you're building an app that uses some Microsoft 365 capabilities and integrates with them via the Graph API, and you also have your own custom APIs that connect to your custom application. You want to expose your custom application's functionality via REST APIs to your app. The trick is that you want to protect your custom-built APIs and hopefully end up with a consistent experience.

In the past I've used IdentityServer to provide this functionality when I used to build fully custom solutions, but this time I was thinking: I'm already using Azure AD to connect to the MS Graph APIs, so what if I could use it to protect my own custom endpoint?

The answer is pretty straightforward: you can easily protect your custom-built API using Azure AD, or even Azure AD B2C if you are building a consumer-type app.

  1. You'll need to create an Azure app registration by navigating to portal.azure.com, going into Azure AD and creating a new app.
  2. In the process of creating your new app, you can choose whether it's single-tenant or multi-tenant, or whether it can be accessed with a consumer account (Microsoft personal accounts); in this case, for simplicity, I'll choose the single-tenant option.
  3. After you create the new app, you will find an option called "Expose an API".
  4. On this screen you can define your API scopes, and also ensure that the client application you have already been using to access Microsoft Graph (or any other Microsoft cloud endpoint) is added as an authorised client application.
  5. Once you have done all of that, the configuration part is complete. What you need to do to make your API protected by Azure AD is simple and very well explained in this GitHub repo: https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2
Now your API will be protected by Azure AD. The other challenging part is figuring out scopes/roles to provide more mature endpoint authorisation; as I mentioned before, you can define scopes for your API in Azure AD and then use these scopes to protect either a whole controller or a specific action using the AuthorizeForScopes attribute.
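On the client side, the application you registered as an authorised client can then request a token for your API the same way it requests Graph tokens. Here is a minimal sketch using @azure/msal-browser, where the api://… scope URI, the access_as_user scope name and the API URL are all placeholders for whatever you defined under "Expose an API" (and depending on your MSAL version you may need an await pca.initialize() first):

```ts
import { PublicClientApplication } from '@azure/msal-browser';

const pca = new PublicClientApplication({
  auth: {
    clientId: '<client-app-id>',
    authority: 'https://login.microsoftonline.com/<tenant-id>'
  }
});

// the scope you defined under "Expose an API" on the API's app registration
const apiScopes = ['api://<api-app-id>/access_as_user'];

async function callCustomApi(): Promise<Response> {
  let account = pca.getAllAccounts()[0];
  if (!account) {
    account = (await pca.loginPopup({ scopes: apiScopes })).account!;
  }
  const { accessToken } = await pca.acquireTokenSilent({ scopes: apiScopes, account });
  // the bearer token is now issued for your API's audience, not for Microsoft Graph
  return fetch('https://localhost:5001/api/todolist', {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
}
```

The resulting token is issued for your API's audience rather than for Microsoft Graph, which is exactly what the middleware on the API side validates.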


A very easy, straightforward approach I found is to define application roles within the Azure AD app registration and use the Authorize attribute with roles. You can add users to a specific application role using the "Enterprise Applications" section of Azure AD, either as a direct assignment, or by assigning a security group to a role if you have an Azure AD P1 or P2 subscription.

Setting up Microsoft Graph Security API Sample


Following up on my previous post, which was a very quick intro to building security apps using Microsoft Graph: to get things up and running, there is no easier way than finding an app built by someone else that demonstrates some use cases and seeing it for yourself. A very good start is https://github.com/Microsoft/securitydev, which has a sample app that displays your organisation's secure score and lists alerts and the actions taken on these alerts.

First let's discuss the components of this sample:
  • An Angular SPA: represents the front end and triggers the authentication flow for the user
  • A set of APIs: connect to the MS Graph security endpoints to collect alert, action and secure score data
  • A notification endpoint: sets up a remote endpoint for MS Graph webhook subscriptions, plus a SignalR-enabled web page that displays notifications in an interactive manner

What do you need to be able to run this sample?
  1. An Azure AD application, which can easily be registered by following this guide: https://docs.microsoft.com/en-us/graph/auth-register-app-v2. Please note that the app needs permission to the MS Graph security endpoints as application permissions, since the security information (alerts, actions and secure score) is accessed by the API endpoint, not under a user identity.
  2. You need to replace the client ID and client secret in both the appSettings.json and environments.ts files.
  3. If you are running this app locally (development environment), you can either run ng build manually or, preferably, add it to your Visual Studio build pipeline. (Note: if you don't have angular-cli installed, you can install it by simply running npm install -g @angular/cli.)
  4. Now you can run the app and launch it in the browser. You will be prompted to log in using your Azure AD credentials, and you can have a go with the app's various pages: there is a security dashboard, alerts, actions, subscriptions and secure scores.
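As a side note on step 1: because the sample reads security data with application (not delegated) permissions, its API obtains tokens via the client-credentials flow. The sample ships with its own plumbing for this, but purely to illustrate the flow in isolation, a sketch using @azure/msal-node (not what the sample itself uses) could look like this:

```ts
import { ConfidentialClientApplication } from '@azure/msal-node';

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: '<client-id>',         // from your app registration
    clientSecret: '<client-secret>',
    authority: 'https://login.microsoftonline.com/<tenant-id>'
  }
});

// app-only token: the .default scope picks up the application permissions granted to the app
async function getAppToken(): Promise<string> {
  const result = await cca.acquireTokenByClientCredential({
    scopes: ['https://graph.microsoft.com/.default']
  });
  if (!result) {
    throw new Error('Token acquisition failed');
  }
  return result.accessToken;
}
```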

Introduction to Microsoft Graph Security APIs


I don't remember being as excited about anything in the past three months, since Liverpool won the Champions League final, as I was about the announcement of the Microsoft Graph Security APIs. I'm still trying to figure out the full potential of these APIs, but I think they will open up possibilities for ISVs, independent developers and partners to start simplifying the way admins and users deal with security alerts and, more importantly, to streamline the alert process across different providers, whether an alert comes from the Microsoft 365 security centre, Cloud App Security (Azure), or a Microsoft vendor/partner.

I decided to take the APIs for a spin and play around with what they currently offer, both in GA (v1.0) and beta. I'm not going for a full-blown approach, so I'll just use the Microsoft Graph Explorer to play around with these endpoints. The steps are pretty simple:
  • Navigate to Graph Explorer at https://developer.microsoft.com/en-us/graph/graph-explorer and log in
  • Make sure you edit the permissions and add at least SecurityEvents.Read.All; this will prompt you to log in again and consent to the newly added scopes of the "Graph Explorer" Azure AD multi-tenant app
  • The browser will redirect you back to Graph Explorer
  • In the URL textbox, type the endpoint /v1.0/security/alerts; you will get a list of aggregated alerts, each including:
    • A unique identifier, which also highlights the Azure tenant and subscription if it's an alert generated by the Office 365 security centre
    • A set of tags based on the configuration of the source system
    • Vendor information
    • User information
    • Severity of the alert (as configured by the originating source)
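Outside Graph Explorer, the same call is a one-liner with the Microsoft Graph JavaScript client. A minimal sketch, assuming you already have an access token carrying the SecurityEvents.Read.All scope:

```ts
import { Client } from '@microsoft/microsoft-graph-client';

// list the ten most recent aggregated alerts from all providers
async function listAlerts(accessToken: string): Promise<void> {
  const client = Client.init({
    authProvider: (done) => done(null, accessToken)
  });
  const alerts = await client.api('/security/alerts').version('v1.0').top(10).get();
  alerts.value.forEach((a: any) => console.log(a.title, a.severity, a.category));
}
```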
Now let's create a new custom alert policy and see for ourselves how long it takes until the Graph Security API picks it up.
  • Log in to your M365 admin portal and click on Security; you will land on either protection.office.com or security.microsoft.com, depending on your subscription. For example, if you have an E3 developer Office 365 account, you won't be able to use Cloud App Security or even add it to your subscription, and you will always get redirected to protection.office.com
  • For simplicity, we will choose Office 365 alerts: if we have landed on security.microsoft.com, click on Policies and then Office 365 alerts
  • For some other bizarre reason, even if you have been redirected to protection.office.com by clicking on Office 365 alerts, you have to choose Alerts and then Alert policies from the left-side navigation!

  • Now let's create a new alert policy, as below; in my test, one that fires when a site is shared with an external user

  • Now I'll navigate to a SharePoint site and share it with an external user
  • After almost a minute or so, I got an email notification that the site had been shared; it took longer for the Microsoft Graph API to pick up the alert, though I'm not sure about the actual time limit from the alert's origination to the aggregation of all alerts
In the end, I know it's a very simple endpoint, but the value this endpoint represents is priceless, as it allows developers to enable cross-product scenarios using the same code base across different use cases like security management, threat detection and information protection.

Microsoft Graph: displaying user & contact images


In this post I'll explain how to get user profile images out of Azure AD and render them using a simple React component. If you are not familiar with Microsoft Graph, you can start getting familiar with it using https://docs.microsoft.com/en-us/graph/ as a starting point, and maybe use Graph Explorer to test some of these endpoints.

The main endpoint we are going to use here is /me/people, which lists the people who are relevant to you. A sample of what this endpoint returns by default is shown below.


If we take a quick look at what the /me/people endpoint returns, it's not only user information: it's a collection of persons and groups, and a person can be an "OrganizationUser" or a contact.

It's easy to render any of these properties; however, what we need to display is the user image, which is not retrieved via the people endpoint. For each item we have to make another call: to the /users/{id}/photo/$value endpoint if it's an "OrganizationUser", otherwise to /me/contacts/{id}/photo/$value. The response is either a 404, if no image has been uploaded, or a ReadableStream.

The key is to convert what we get back, an object of type ReadableStream, to a blob URL and then render it in an HTML image tag.
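A minimal sketch of that conversion using plain fetch (assuming you already have a Graph access token; the Graph client SDK works too, but fetch makes the stream-to-blob step explicit):

```ts
async function getPhotoUrl(userId: string, accessToken: string): Promise<string | undefined> {
  const res = await fetch(`https://graph.microsoft.com/v1.0/users/${userId}/photo/$value`, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  if (!res.ok) {
    return undefined; // 404 means the user has no photo uploaded
  }
  const blob = await res.blob();    // drains the ReadableStream into a Blob
  return URL.createObjectURL(blob); // a blob: URL that an <img> tag can render
}
```

The React component can then simply render <img src={photoUrl} /> once the promise resolves, and should call URL.revokeObjectURL when it unmounts to free the memory.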


Another mechanism is to build a proxy API that stores the images in an online cache and serves user-based image requests; it's somewhat similar to a mechanism I used before in this three-year-old post: http://www.sharepointtweaks.com/2016/01/officedev-the-new-intranet-loosely-coupled-approach.html


Modern Experience: optimizing the performance of your SPFx components


When building custom SharePoint Online solutions, there is not much you can do to optimise server performance; your areas of improvement will be around the following basic items:

  1. Reduce content size (size of images, media you add to the SharePoint Pages)
  2. Reduce bundle size (SPFx solutions)
  3. Optimise calls to SharePoint APIs or any external APIs
  4. Network and Infrastructure optimisation
  5. Client device Optimisation
In this post I'll focus on #2: the things you need to worry about when creating custom SPFx components, whether application customizers or individual web parts.
  • Measure and monitor your bundle size
Before you try to introduce complexity to your work, make sure you actually have a problem to tackle first; over-optimisation is a mistake we have all been guilty of. Please refer to my previous post about measuring SharePoint Online performance here.

So, let's say you have identified that you have a problem, and this problem is potentially due to a bigger bundle size; a common symptom is a very high JavaScript evaluation time. The first step is to locate which component is causing your bundle size to balloon, which is a pretty straightforward process. One way of doing it is to use https://www.npmjs.com/package/webpack-bundle-analyzer, which gives you a nice graphical representation and size information for each component (raw, minified and compressed).
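Wiring the analyzer into an SPFx build is a well-known pattern; a sketch of the gulpfile.js addition (the output paths are up to you):

```js
'use strict';
const gulp = require('gulp');
const path = require('path');
const build = require('@microsoft/sp-build-web');
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

build.configureWebpack.mergeConfig({
  additionalConfiguration: (generatedConfiguration) => {
    const dropPath = path.join(__dirname, 'temp', 'stats');
    generatedConfiguration.plugins.push(new BundleAnalyzerPlugin({
      openAnalyzer: false,
      analyzerMode: 'static', // emit a standalone HTML report per bundle
      reportFilename: path.join(dropPath, 'bundle-report.html'),
      generateStatsFile: true,
      statsFilename: path.join(dropPath, 'bundle-stats.json')
    }));
    return generatedConfiguration;
  }
});

build.initialize(gulp);
```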


  • Move common dependencies to be "externals"
If you're using a common component or library in different web parts, consider moving it into externals so it won't be packaged into every single web part's JS file.
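In an SPFx project this is an entry in config/config.json; a sketch for a non-AMD library, where the CDN URL and global name are placeholders:

```json
{
  "externals": {
    "lodash": {
      "path": "https://cdn.example.com/lodash/4.17.11/lodash.min.js",
      "globalName": "_"
    }
  }
}
```

The library is then loaded once from the CDN at runtime and excluded from every web part bundle that imports it.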

  • Don't just use libraries; a simple solution might perform better
For instance, imagine you want to display a spinner, and because you are a good SharePoint developer, you'd love to use the office-ui-fabric-react controls: import the Spinner and use it while your web part fetches whatever content you're after. Sounds like a good idea? Think again: just importing the Spinner introduces almost 1.6 MB of JavaScript code, which can be substituted with a simple loading gif. Don't get me wrong: if you want to use the React controls package, use it, but make sure you use it for more than just a spinner; I'd also rather move it to externals and serve it compressed (gzipped), at around 200 KB.
  • Reduce font files to the bare minimum
This one is very handy. Let's say you're using Font Awesome to display some nice-looking icons: including the full font files in your bundle adds more than 900 KB of payload to your application, and to be honest, it's not about hosting the fonts as externals or not; it's about whether you really need the whole font set or just four or five icons.

There is a full post on this topic here: https://blog.webjeda.com/optimize-fontawesome/. It helped me reduce my font files to a total of ~19 KB instead of 931 KB.


  • Use another CDN solution that allows compression 
The Office 365 public CDN is good, but it's not good enough, as it doesn't provide compression. You could easily use an Azure CDN endpoint and enable compression; it shouldn't cost much, as you are not really delivering heavyweight static content.


After all of these modifications, which don't really amount to a lot of refactoring, I managed to get a 90/100 score for the SharePoint modern site, which was a huge performance boost.






SharePoint Online: Measuring SharePoint Modern Experience Performance


In this post I'll try to share my experience of measuring the performance of the SharePoint Online modern experience and, most importantly, what you should expect and communicate to your client; especially a client that is moving from a fully branded, customised on-premises intranet to the SharePoint Online modern experience.

First, you need to identify the metrics you will use to assess the intranet's performance. For this post, I'll try not to get sidetracked into accessibility and the other non-functional aspects of your intranet platform.

There are many metrics that could be used, but as we are putting a SaaS platform to the test, I will ignore any server-related performance metrics, since we can't optimise server performance by any means.
Of course, you can always check the x-sharepointhealthscore custom header value and contact Microsoft support if you are not happy with your tenant's performance, but for this post I'll focus on the client side, which is mostly affected by your client machine and browser.


There are many tools you can use to measure the performance of a website; however, if you are using the Chrome browser (please don't tell me you are using Edge, or worse, Internet Explorer; even Microsoft has given up on them), launching the developer tools gives you an audit tool that can perform a full audit of the current website. The tool is called Lighthouse, which is why I decided to stick with the metrics Google uses to score a website's performance:

  • Time to First Contentful Paint
  • Time to First Meaningful Paint
  • Time to Interactive
  • Time to CPU Idle
  • Speed Index
  • Estimated Input Latency


That's good; we can easily execute the audit, but how can we automate it? The answer is very simple: the good lads at Google have built us a CLI for Lighthouse that can be installed using npm. Please check the GoogleChrome lighthouse repo for more information on how to install and run it.

In short, you will be able to run a command line that executes only a performance audit.

The script is very simple: I start by getting the URLs of the SharePoint Online pages I want to test from a CSV file, which also holds the required number of runs for each page.

Then I use the Lighthouse command flags to ensure there is no throttling or emulation (--disable-device-emulation --throttling-method=provided), and I export each audit run's output as a JSON file into a specific output folder.

I also pass the --disable-storage-reset switch to ensure the browser cache can be used; another flag worth mentioning is --only-categories=performance, which executes only the performance-related audits.
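The original post showed the script as a screenshot, and it was written in PowerShell. As a rough reconstruction of the same loop, here is a Node/TypeScript sketch, under the assumption that each CSV row looks like url,runs:

```ts
import { execSync } from 'child_process';
import { readFileSync } from 'fs';

// each CSV row is assumed to look like: https://tenant.sharepoint.com/sites/intranet,5
const rows = readFileSync('pages.csv', 'utf8').trim().split('\n');

for (const row of rows) {
  const [url, runs] = row.split(',');
  const name = url.replace(/[^a-z0-9]/gi, '_'); // safe file name per page
  for (let i = 0; i < Number(runs); i++) {
    // no throttling or emulation, keep browser storage/cache, performance audits only
    execSync(
      `lighthouse ${url} --disable-device-emulation --throttling-method=provided ` +
      `--disable-storage-reset --only-categories=performance ` +
      `--output=json --output-path=./output/${name}-${i}.json`,
      { stdio: 'inherit' }
    );
  }
}
```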


When I started running the report, I got amazing results; however, when I looked at the trace using the Lighthouse report viewer, I found out that I was being redirected to the login page, which doesn't have much on it, hence the amazing 100/100 score.


I looked at ways to pass in my login credentials, but I found an easier way, which is running Chrome in debug mode.
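The commands are just the standard Chrome remote-debugging switch and the Lighthouse --port flag (Windows example; adjust the binary path for your OS, and the port number is arbitrary):

```
# launch Chrome with remote debugging enabled
chrome.exe --remote-debugging-port=9222

# later, point Lighthouse at that session instead of letting it launch its own Chrome
lighthouse https://<tenant>.sharepoint.com/sites/intranet --port=9222 --only-categories=performance
```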


This will launch a new Chrome window; I'll navigate to the SharePoint Online URL and log in with my user credentials, so this browser session already has my user logged in. Afterwards, I need to pass the resulting port number to the Lighthouse CLI via the --port flag.

It's kind of a lazy solution, but it worked OK for me; it would definitely be better to run Chrome headless.

The results

After running the PowerShell script, I have a number of JSON files, and I want to get the average value of the above metrics. Do I need to write another PowerShell script? Hmm, I don't think so; I'm a very lazy person, and I have a MongoDB instance installed on my laptop. I imported the files into a collection and then ran the following script to get the average results.
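The aggregation itself was shown as a screenshot; it was along these lines (a mongo shell sketch where the collection name and the exact Lighthouse JSON field paths are assumptions, so check them against your report files):

```js
// average selected Lighthouse metrics per page across all imported runs
db.lighthouseRuns.aggregate([
  { $group: {
      _id: "$requestedUrl",
      avgFirstContentfulPaint: { $avg: "$audits.first-contentful-paint.numericValue" },
      avgTimeToInteractive:    { $avg: "$audits.interactive.numericValue" },
      avgSpeedIndex:           { $avg: "$audits.speed-index.numericValue" },
      avgPerformanceScore:     { $avg: "$categories.performance.score" }
  } }
])
```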


To be honest, for an empty OOTB team site, Google gave a performance score of 86/100, which is a cool score; but as you might see, it's all about JS execution time and main-thread work.


If your client is after an intranet solution with a speed index of less than 3 seconds, you might need to consider building your intranet as a loosely-coupled intranet, a concept we discussed here more than three years ago; you can find more details in the following post:
http://www.sharepointtweaks.com/2016/01/officedev-the-new-intranet-loosely-coupled-approach.html

If you are happy with the current performance and want to customise SharePoint Online and start building SPFx extensions and web parts, you need to be very careful and very cautious about what you use, as every bit of JavaScript will matter; at some point, you will have to tell your client: yes, I can do this, but it will slow your site down.

In the next post I'll list some techniques that helped me lift a custom SharePoint Online intranet from a score of 50/100 up to be comparable with the OOTB score of 86/100.

Till next time

SPFx: Modal Dialog, show classic SharePoint forms


Remember the modal dialog? That surely brings back some memories. It was easy back in the day: simply instantiate a new SP.UI.ModalDialog with the appropriate options, then show the dialog, and that's it!

Of course, we had to make sure we appended isDlg=true to the URL query string when showing a SharePoint form.

We also used to make sure the header and footer adhered to the branding guidelines, so that when we customised the heck out of the SharePoint master page (prior to SharePoint 2013), we didn't get a funny header and footer ruining our dialog box.

The other bit that happened for us: when we pressed the Cancel or OK button on a SharePoint form (whether item display or edit), the modal dialog disappeared magically.

So let's take SPFx: how can we replicate the same functionality with the simplest possible approach?

BaseDialog & DialogContent to the rescue

BaseDialog is an abstract class wrapped and delivered to us as part of the @microsoft/sp-dialog package. By simply extending this dialog class and implementing the render method, you can construct the content of your dialog (there are heaps of other methods for extending and customising the behaviour of the dialog, but this post is about the simplest iframe dialog ever).

In our case, let's implement a new React component called IframeContent, which acts as a container for our iframe. This simple React component contains a single root DialogContent component, imported from the office-ui-fabric-react package. The IframeContent component has a single child element, which is the iframe HTML tag.


The second component is even simpler: the dialog itself, which extends BaseDialog. The most significant bit of code is the render method, which basically does nothing apart from rendering a single instance of the previously created dialog content.


So far so good, but what if we need to replicate the magic we used to have (hiding the dialog upon clicking OK or Cancel on an OOTB classic SharePoint form)?

The answer is simply to use the same old mechanism: the old forms send an event to the parent window called "CloseDialog", so what we need to do is simple: let our React component listen for the event and call the close method.

The full code of the IframeDialog component is below.
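(The original post embedded the code as images; the sketch below reconstructs the approach described above, so treat the exact props and the "CloseDialog" message check as approximations.)

```tsx
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { BaseDialog } from '@microsoft/sp-dialog';
import { DialogContent } from 'office-ui-fabric-react/lib/Dialog';

interface IIframeContentProps {
  url: string;
  onDismiss: () => void;
}

// container component: a DialogContent hosting a single iframe
class IframeContent extends React.Component<IIframeContentProps> {
  public componentDidMount(): void {
    window.addEventListener('message', this.onMessage);
  }

  public componentWillUnmount(): void {
    window.removeEventListener('message', this.onMessage);
  }

  public render(): React.ReactElement<IIframeContentProps> {
    return (
      <DialogContent showCloseButton={true} onDismiss={this.props.onDismiss}>
        {/* isDlg=true strips the classic chrome, as in the SP.UI.ModalDialog days;
            a real implementation should handle URLs that already have a query string */}
        <iframe src={`${this.props.url}?isDlg=true`} style={{ width: '600px', height: '400px', border: 'none' }} />
      </DialogContent>
    );
  }

  // classic forms notify the parent window when OK/Cancel is clicked
  private onMessage = (event: MessageEvent): void => {
    if (typeof event.data === 'string' && event.data.indexOf('CloseDialog') !== -1) {
      this.props.onDismiss();
    }
  }
}

export default class IframeDialog extends BaseDialog {
  constructor(private url: string) {
    super();
  }

  public render(): void {
    ReactDOM.render(
      <IframeContent url={this.url} onDismiss={() => { this.close(); }} />,
      this.domElement
    );
  }

  protected onAfterClose(): void {
    super.onAfterClose();
    ReactDOM.unmountComponentAtNode(this.domElement);
  }
}
```

Showing a classic form from a web part or extension is then just new IframeDialog(itemFormUrl).show().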
A more comprehensive implementation with different building blocks can be found in the following GitHub repo: https://github.com/SharePoint/sp-dev-fx-controls-react




SharePoint Online: What really happens when you click the follow/unfollow site buttons



So, I'm back for the first post of this year after quite a break. I can't believe it's 2019 already and the Dubai 2020 Expo is only one year away; I don't live in Dubai anymore, but I remember thinking of 2020 as the far future.
Without further ado, let's dive into this blog post's topic:

What really happens when you star or unstar a SharePoint Online site on the SharePoint home page? I presumed (naive me) that a call to the follow API endpoints is triggered, but my naivety has been proven wrong many times before; for example, when I thought the modern news web part used search analytics to display the view count (it turns out to get it from an endpoint at https://{your-region}.sphomep.svc.ms; read more about this here).


Similarly, following and unfollowing a site use a similar kind of endpoint.

First, let's see what happens when we unstar an already followed site. A POST request is fired, as below:


This request has the usual header information plus a bearer token, which looks like the below after decoding the base64 and removing the signing bits at the end.

The function used to update the followed-site status is called sendSiteFollowingUpdateRequest, and it takes three arguments; the first is an object containing whether the site is followed or not, plus the site card item information.


Next, let's try to understand how the aforementioned bearer token was obtained. Looking at the session storage, I can see that the same token is saved under the key "ms-oil-datasource-SpHomeApiDataSource", as below:


Going through the code, I can see that it was obtained by a simple POST request to the endpoint _api/SP.OAuth.Token/Acquire with the proper digest value.
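Just to make the flow concrete, here is a rough sketch of that acquisition call; this is an undocumented endpoint, so the response shape (and whether Acquire expects any request body) is an assumption from watching the page, and, as noted below, it shouldn't be relied on outside a proof of concept:

```ts
// hypothetical sketch of the token acquisition the SharePoint home page performs
async function acquireSpHomeToken(webUrl: string): Promise<string> {
  // grab a request digest first, as for any SharePoint POST
  const digestRes = await fetch(`${webUrl}/_api/contextinfo`, {
    method: 'POST',
    headers: { Accept: 'application/json;odata=nometadata' },
    credentials: 'include'
  });
  const digest = (await digestRes.json()).FormDigestValue;

  const tokenRes = await fetch(`${webUrl}/_api/SP.OAuth.Token/Acquire`, {
    method: 'POST',
    headers: {
      Accept: 'application/json;odata=nometadata',
      'X-RequestDigest': digest
    },
    credentials: 'include'
  });
  // inspect the actual payload in your own tenant; access_token is an assumed property name
  return (await tokenRes.json()).access_token;
}
```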




Maybe one day we will have full documentation for the sphome.svc.ms web services and the kind of first-party functionality exposed there.

These findings are only accurate at the time of writing, as these are not publicly available, versioned APIs; use them at your own discretion, and preferably not outside of a PoC.


Ciao