Google Natural Language API

Here are some notes about the Google Natural Language API that I wrote three years ago. Some features may have changed a bit; refer to here for the latest:
Refer to here for basic concepts:



The Google Cloud Natural Language API provides natural language understanding technologies to developers, including:

  • sentiment analysis – English
  • entity recognition – English, Spanish, and Japanese (in effect, an analysis of the nouns)
  • syntax analysis – English, Spanish, and Japanese

The API has a call for each feature, plus one call that does them all together: analyzeEntities, analyzeSentiment, and annotateText.

sentiment analysis

Returns values describing the emotional leaning of the text (currently only how negative or positive it is).
The result contains a polarity and a magnitude value.
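As an illustration, here is a tiny helper that turns the polarity/magnitude pair into a rough label. The thresholds are arbitrary choices for this sketch, not part of the API:

```python
def sentiment_label(polarity, magnitude, threshold=0.25):
    """Map a polarity/magnitude pair to a rough label.

    polarity:  -1.0 .. 1.0, overall negative/positive leaning.
    magnitude: 0.0 and up, overall strength of emotion in the text.
    The threshold is an arbitrary choice for this sketch.
    """
    if magnitude < threshold:
        return "neutral"
    return "positive" if polarity > 0 else "negative"

print(sentiment_label(0.8, 3.2))   # strongly positive input
print(sentiment_label(-0.6, 1.0))  # clearly negative input
print(sentiment_label(0.1, 0.05))  # too weak to call either way
```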

entity recognition

Finds the “entities” in the text – prominent named “things” such as famous individuals, landmarks, etc.
Returns the entities together with their Wikipedia URLs, etc.
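A sketch of walking through the result. The field names (entities, name, type, salience, metadata.wikipedia_url) follow the public API's JSON schema, but the sample response below is made up:

```python
# A made-up response in the shape the analyzeEntities call returns.
sample_response = {
    "entities": [
        {
            "name": "Larry Page",
            "type": "PERSON",
            "salience": 0.7,
            "metadata": {"wikipedia_url": "https://en.wikipedia.org/wiki/Larry_Page"},
        },
        {
            "name": "Google",
            "type": "ORGANIZATION",
            "salience": 0.3,
            "metadata": {"wikipedia_url": "https://en.wikipedia.org/wiki/Google"},
        },
    ]
}

# Print each entity with its type and Wikipedia link, if any.
for entity in sample_response["entities"]:
    url = entity["metadata"].get("wikipedia_url", "(no wiki page)")
    print(f'{entity["name"]} [{entity["type"]}] -> {url}')
```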

syntactic analysis

Syntax analysis returns two things:
1. The sentences/sub-sentences of the input text.
2. The tokens (words) and their metadata in a grammatical dependency tree.


Test steps and commands ++++++++++++++++++++++++++++++++++

gcloud auth activate-service-account --key-file=/yourprojectkeyfile.json

gcloud auth print-access-token

print-access-token gives you a token for the following commands. I created three JSON files to test each feature, so I can use these commands to try the three APIs:

curl -s -k -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-d @entity-request.json \
"https://language.googleapis.com/v1/documents:analyzeEntities"

curl -s -k -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-d @syntactic-request.json \
"https://language.googleapis.com/v1/documents:analyzeSyntax"

curl -s -k -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-d @3in1-request.json \
"https://language.googleapis.com/v1/documents:annotateText"

For how to create these JSON files as input, please refer to the Google SDK docs.
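As a rough sketch, the request body for the entity call looks like this. The field names follow the public v1 API schema, and the content text is just an example:

```python
import json

# Shape of entity-request.json per the public Natural Language API schema.
request_body = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "Google was founded by Larry Page and Sergey Brin.",
    },
    "encodingType": "UTF8",
}

# Write it out so the curl command above can send it with -d @entity-request.json.
with open("entity-request.json", "w") as f:
    json.dump(request_body, f, indent=2)
```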




Spoken dialogue system open resources

OpenDial – Java

DeepPavlov – Python

Jindigo – Java

jVoiceXML – Java

CMU RavenClaw – C++/Perl

PED – Prolog

OwlSpeak – Java

IrisTK – Java

InproTK – Java, Python

Rivr – Java – VoiceXML


summary ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Commercial cloud services include Facebook, Microsoft LUIS, Nuance, and Google.

Speech Recognition and Speech Synthesis open resources

CMU Sphinx – C/C++/Java












HTK Toolkit – C





Speaker Recognition and Diarization open resources

Speaker Recognition


SIDEKIT – Python
MSR Identity Toolbox – MATLAB
Kaldi – scripting



Common modeling approaches: GMM-UBM and i-vector.


Speaker Diarization


Kaldi CALLHOME_diarization – scripting

Pyannote – Python

Aalto-speech – Python, for segmentation




Text structure extraction from PDF: brainstorming

1. Use existing tools like GROBID.

GROBID uses machine learning to extract structured data from scientific papers. It has a demo page here. My tests show that it can extract the title and some other data, but the content may still be mixed with footers and headers. Since it is designed for academic documents, it may have issues with other types of PDF.

2. Borrow ideas from GROBID to build a system adapted to the PDF types that you use.
PDFs with a unified format may give better results, but this does not generalize to arbitrary PDFs.

3. Convert the PDF to a doc file, and then use doc tools to extract the content structure?


Some brainstorming ideas:

1). Use a tool like pdfclown to extract the position and style info of the PDF text.

2). PDFs of the same category share patterned styles and positions, so we have a chance to find the structure of the file.

3). Based on the style of the text, it is possible to build a tree-like text structure, though this tree may not match the real chapter tree. This method can help with section levels and titles.

4). How to find the main content text?
From statistical info about the text fonts: the style with the biggest share of occurrences is normally the main content. Since the main content is styled differently from the other parts, it is possible to get good results here.

5). By checking upward from the bottom of each page, if the PDF has many pages and a unified footer format, it is possible to find which style and font belong to the footer, and to discover the footer pattern statistically.

6). Using the same trick as for the footer, it is possible to find the header, if the page has one.

7). If we have the position and style info of the PDF text, we may also be able to train a classifier on positions and styles to find the basic structure of the file.
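Idea 4 above can be sketched like this. The text spans below stand in for whatever a tool like pdfclown would extract; the data is hypothetical, not from a real PDF:

```python
from collections import Counter

# (text, font, size) tuples as a PDF extractor might report them.
spans = [
    ("1. Introduction", "Times-Bold", 14),
    ("PDF structure extraction is hard.", "Times-Roman", 10),
    ("We survey several heuristics.", "Times-Roman", 10),
    ("Our tests show mixed results.", "Times-Roman", 10),
    ("Page 3 of 12", "Helvetica", 8),
]

# Assumption: the most frequent (font, size) pair is the body-text style.
style_counts = Counter((font, size) for _, font, size in spans)
body_style = style_counts.most_common(1)[0][0]

# Keep only the spans whose style matches the dominant one.
body_text = [text for text, font, size in spans if (font, size) == body_style]
print("Body style:", body_style)
print("Body lines:", body_text)
```

The same counting trick could be restricted to the last line of each page to spot a recurring footer style, as in ideas 5 and 6.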


Some similar work and papers

a Chinese patent for this:

a Swiss (CH) team's work called Xed


An HP paper describes a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories.

Tools for PDF text extraction

PDF Tools list

JPedal (commercial software)
iText (commercial software)

Apache Tika






1. Tools comparison and benchmarking

PDF format

2. How to remove headers and footers from a PDF

3. Finding paragraphs in text

4. Finding sentences

7. Structure extraction from PDF

How to build and run the CMU Olympus-RavenClaw dialog system framework – 3

How does the Olympus system work? This is a summary I wrote after reading a bit of the Olympus code.

1. How is the system started?

Open your SystemRun.bat; it will call this line:

START "" /DConfigurations\%RunTimeConfig% "%OLYMPUS_ROOT%\Agents\Pythia\dist\process_monitor.exe" %RunTimeStartList%.config

About the startlist.config file and Pythia's process_monitor.exe (MITRE in the old project), you need to read this page:
Pythia is a Windows process manager that controls the starting and stopping of many processes. Written in Perl and built as process_monitor.exe, it reads the startlist.config file to control each process of the system.

2. How is each module started?

Pythia starts many processes and they communicate with each other. Each process serves as a module of the system, such as ASR or TTS. They work together to make the whole system run.

TTYRecognitionServer is the module that interfaces with the terminal for audio and keyboard input. Pythia will in fact read the file ttyserver_start.config to start this process, and will run its command line as one process:

--start - tells Pythia to start it
--input_line - tells Pythia to open an input box for it in the GUI.

3. What is the HUB?

What each process does, and how these processes communicate with each other, is the important part now. Each process is called a server, and there is a HUB that links all the servers together:

So what is a server?

4. So how do the hub and the servers exchange data?

There is a .pgm file that defines all the servers' info – names, ports, rules, etc. The hub simply reads this file and then links all the servers together so they can exchange data.
Rules in programs (like main) tell Galaxy what the Hub should do when it gets a certain message.

Now you have the basic structure of the whole system.


5. So how are tasks organized in the dialog system (RavenClaw)?

RavenClaw uses a tree to define the task relations, like the example here:


A sub-node under a task is a sub-task; to finish a task, you go through its sub-tasks from left to right. RavenClaw defines a set of C macros, and the developer uses these macros to define the task tree:

An example looks like this: