
Monday, June 12, 2017

IBM Watson Knowledge Studio for cognitive modelling across different domains


We all know that NLU (Natural Language Understanding) understands basic language structure and can extract keywords and entities from a stream of text. But what if you want custom entities and keywords for your own domain, e.g. aviation or healthcare-specific data and language analysis?


So normal text can be fed to NLU's out-of-the-box (OOB) model and it will generate entities from it, but if you need a custom model, you have to train one using IBM Watson Knowledge Studio (WKS).

[Image: Watson Knowledge Studio, train model]

Let's get started.

First, collect your historical data. In my case I am building a custom model for a service-desk chat, so I collected the old chats as .txt files and imported them into Watson Knowledge Studio.

Go to the Type system section and create entities, roles and relationships as shown below.







I wanted to capture that a person is located in a specific country, and to grab that as an entity, so I created the relationship shown above.
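To make the idea concrete, a type system for this example might declare two entity types and one relation between them. The JSON below is an illustrative sketch only, not the exact format WKS imports or exports:

```json
{
  "entityTypes": [
    { "label": "PERSON" },
    { "label": "COUNTRY" }
  ],
  "relationTypes": [
    { "label": "locatedIn", "firstEntity": "PERSON", "secondEntity": "COUNTRY" }
  ]
}
```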

The next step is to go to the Documents section, import the document set, and create an annotation set.


[Image: Creating an annotation set in IBM Knowledge Studio]


Now navigate to Human annotation, create a task, assign it to yourself, and specify the annotation set as “chat”, which we created earlier.



Now create the task, open it, and manually annotate all the chats as shown below.





Find all your entities and map them with relations; I added a single relation for demo purposes.


Do this for all five documents and submit them.


Once the human annotations are submitted, go to the main page and accept the annotations.



The task should be in Completed status. Now you need to create a machine-learning model and feed these annotations in as ground truth.

Go to the Annotator component section and create a new machine-learning annotator component.


Training, Test and Blind sets are three different kinds of sets, and each must be populated with a minimum number of documents; otherwise the model will not be trained and you will get errors when you click Train and Evaluate.
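Purely as an illustration of the idea, a split of annotated documents into those three sets might look like the sketch below. The 70/20/10 ratios and the helper function are my own for this example, not WKS's actual defaults:

```python
# Illustrative sketch: splitting annotated documents into training,
# test and blind sets. Ratios here (70/20/10) are an example only.
import random

def split_documents(docs, train_pct=70, test_pct=20, seed=42):
    """Shuffle and split; whatever is left over becomes the blind set."""
    docs = list(docs)
    random.Random(seed).shuffle(docs)
    n = len(docs)
    n_train = n * train_pct // 100   # integer math avoids float surprises
    n_test = n * test_pct // 100
    return (docs[:n_train],
            docs[n_train:n_train + n_test],
            docs[n_train + n_test:])

docs = [f"chat_{i}.txt" for i in range(10)]
train_set, test_set, blind_set = split_documents(docs)
print(len(train_set), len(test_set), len(blind_set))  # 7 2 1
```

The point is simply that each bucket ends up non-empty; with too few documents overall, one of the sets falls below the tool's minimum and training refuses to start.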


If the sets are populated correctly, you will see the following screen: the model is being trained to identify the person and location entities in the text chats. Wait about 10 minutes and it should turn green.


[Image: IBM Watson Knowledge Studio, model training in progress]


Once it has turned green, the model is trained; push it to NLU from IBM WKS and see the results.



Let's test the model.



Pre-annotation processing is now running for the document set we just uploaded, which is not yet annotated; let's see what the model does with our non-annotated documents.





Now you can see the statistics for both training and test data.




Export your model to Natural Language Understanding so you can make API calls from Node.js or the Java SDK to get custom entities or relationships from a stream of text that you send to IBM NLU.

Let's see how to do that:


Create a version of the model and click on deploy.


Select the region and space, then deploy.


[Image: NLU deployment started for the model from WKS]





Open Postman and fire the cURL request.



Enter your NLU credentials and your model ID, then fire the command.

Make sure you remove newlines when you copy from the web, set the authorization type to Basic, and enter your username and password there.

The final call will look like this.
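The request shape can also be sketched in Python, which shows the same pieces you set in Postman: the Basic auth header, the JSON body, and the custom model ID passed under the entities and relations features. The service URL and version date below reflect the 2017-era NLU API, and the credentials and model ID are placeholders; substitute your own:

```python
# Sketch of the NLU /v1/analyze call with a custom WKS model.
# URL, version date, credentials and model ID are placeholders.
import base64
import json

USERNAME = "your-nlu-username"   # from the NLU service credentials
PASSWORD = "your-nlu-password"
MODEL_ID = "your-wks-model-id"   # shown in WKS after deployment

url = ("https://gateway.watsonplatform.net/natural-language-understanding"
       "/api/v1/analyze?version=2017-02-27")

# Basic auth header, the same thing Postman's Authorization tab builds.
auth = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
headers = {
    "Authorization": f"Basic {auth}",
    "Content-Type": "application/json",
}

# Ask for entities and relations extracted by the custom model
# instead of NLU's out-of-the-box model.
body = json.dumps({
    "text": "my laptop is not working and I am located in Norway",
    "features": {
        "entities": {"model": MODEL_ID},
        "relations": {"model": MODEL_ID},
    },
})
print(body)
```

To actually send it, you would POST this body with those headers, for example via `urllib.request` or the Node.js/Java SDKs mentioned above.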




Success!

I created a custom entity called PERSON, which was recognized for the text “my” and returned by my custom model.


So the actual story starts here: collect your old data for your domain (for aviation, flight data records; for health and science, clinical results), annotate it yourself or ask your SMEs to annotate the text for you, and feed it into the custom model.


I hope I could explain the prototype of “How to make custom language models in Watson Knowledge Studio, and how to annotate and export them”.


Keep reading!
