LinkedTV Demo

The following sections describe a simple analysis of user interactions with annotated video.


Aggregated content description

Additional structure over the result of entity classification is provided by the DBpedia ontology. The appearance of an entity in a subtitle activates one or more types in the ontology, and the activation is propagated up to the root. The result of the propagation is available in the form of a dendrogram.
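The propagation step can be sketched as follows. The type hierarchy below is a toy fragment, not the real DBpedia ontology, and the function name is hypothetical:

```python
# Toy fragment of a type hierarchy: child -> parent.
# "Thing" is the root and has no parent.
PARENT = {
    "MilitaryConflict": "SocietalEvent",
    "SocietalEvent": "Event",
    "Event": "Thing",
}

def propagate(activated_types):
    """Count activations for each type and all of its ancestors."""
    counts = {}
    for t in activated_types:
        node = t
        while node is not None:
            counts[node] = counts.get(node, 0) + 1
            node = PARENT.get(node)  # None once the root is passed
    return counts

print(propagate(["MilitaryConflict"]))
```

Aggregating these counts over all subtitles of a video yields the activation profile that the dendrogram visualizes.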

Analyzing subtitles

GAIN retrieves subtitles from the videos via the YouTube API and sends them via a web service call to the THD entity recognition system for analysis. The results are returned within several seconds and contain a list of recognized entities for each subtitle. Entities are assigned a DBpedia URI and a DBpedia Ontology type (where available). A JSON-formatted result can be obtained at, where XXX is replaced by a YouTube video identifier, such as k4JstBdOsgk.
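As an illustration, the snippet below extracts entity URIs and ontology types from a THD-style response. The field names (`subtitle`, `entities`, `dbpedia_uri`, `ontology_type`) are assumptions based on the description above, not the documented THD schema:

```python
import json

# Hypothetical THD-style response for one subtitle; the field names
# are assumptions, not the actual THD output schema.
sample = """
[
  {"subtitle": "Battle scenes from the war",
   "entities": [
     {"surface": "war",
      "dbpedia_uri": "http://dbpedia.org/resource/War",
      "ontology_type": "http://dbpedia.org/ontology/MilitaryConflict"}
   ]}
]
"""

def extract_entities(response_text):
    """Return (DBpedia URI, ontology type) pairs for each subtitle."""
    pairs = []
    for block in json.loads(response_text):
        for e in block.get("entities", []):
            # ontology_type may be missing ("where available")
            pairs.append((e["dbpedia_uri"], e.get("ontology_type")))
    return pairs

print(extract_entities(sample))
```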


Aggregating Interest Clues

Multiple interest clues can be recorded for a specific shot (the duration of one subtitle). These are all aggregated to a single scalar interest value using a list of hand-coded rules. The aggregated interest is shown for each shot and user.

Hand-coded rules:

Skip forward +10 sec: interest = -0.5
Look at screen: interest = 0
Not look at screen: interest = -1.0
Bookmark: interest = +1.0
View related content: interest = +1.0
Volume+: interest = +0.5
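The rules above can be sketched as a simple lookup-and-combine step. The clue identifiers and the aggregation strategy (summing the weights, then clipping to [-1, 1]) are assumptions for illustration:

```python
# Hand-coded interest weights from the rule list above; the clue
# names are hypothetical identifiers.
INTEREST_RULES = {
    "skip_forward_10s": -0.5,
    "look_at_screen": 0.0,
    "not_look_at_screen": -1.0,
    "bookmark": 1.0,
    "view_related_content": 1.0,
    "volume_up": 0.5,
}

def aggregate_interest(clues):
    """Combine all clues recorded for one shot into a single scalar.

    Assumption: weights are summed and the result clipped to [-1, 1].
    """
    total = sum(INTEREST_RULES[c] for c in clues)
    return max(-1.0, min(1.0, total))

print(aggregate_interest(["bookmark", "volume_up"]))  # 1.5 clipped to 1.0
```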

Exporting Data

The result of GAIN is a table containing one instance (row) for each shot. Columns correspond to classes from the DBpedia ontology (with uninformative ones omitted) and to the names of recognized entities; the last column holds the interest value. The data can be downloaded as a CSV table.
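A minimal sketch of assembling such a CSV export. The column names mirror the JSON example, and the helper function is hypothetical:

```python
import csv
import io

def shots_to_csv(shots, columns):
    """Serialize one row per shot; the interest value goes last."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns + ["interest"])
    writer.writeheader()
    for shot in shots:
        row = {c: shot.get(c, 0) for c in columns}  # absent class -> 0
        row["interest"] = shot["interest"]
        writer.writerow(row)
    return buf.getvalue()

shots = [{"d_o_actor": 0, "d_r_war": 1, "interest": 1}]
print(shots_to_csv(shots, ["d_o_actor", "d_r_war"]))
```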

Example of output (JSON format):

        "accountId": "YOUTUBE-TEST",
        "d_o_actor": 0,
        "d_r_war": 1,
        "interest": "1",
        "objectId": "",
        "parentObjectId": "",
        "sessionId": "1380889864168",
        "userId": "1"

Learning Preferences

On clicking the "Learning preferences" button, the data are sent via a web service call to the preference learning module. The minimum support is set to 2 instances and the minimum confidence to 0.7. The discovered rules appear within several seconds under the player. The exported data can also be analyzed manually, e.g. by uploading the exported CSV to EasyMiner.

The data are uploaded to the preference learning module using an HTTP PUT of the previous JSON output:

HTTP PUT /pl/api/YOUTUBE-TEST/data?uid=1

To start mining the association rules and retrieve the result in PMML format:

HTTP PUT /pl/api/YOUTUBE-TEST/rules?uid=1

You can get previously mined rules in PMML format:

HTTP GET /pl/api/YOUTUBE-TEST/rules?uid=1
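The three calls share one URL pattern, which can be captured in a small helper. The host is an assumption (no real endpoint is given in this document), and no request is actually sent here:

```python
# Hypothetical host; substitute the real preference learning server.
BASE = "http://example.org"

def pl_endpoint(account_id, resource, uid):
    """Build a preference-learning API URL for the given account/user."""
    return f"{BASE}/pl/api/{account_id}/{resource}?uid={uid}"

# The sequence from the text: upload data, mine rules, fetch rules.
upload_data = ("PUT", pl_endpoint("YOUTUBE-TEST", "data", 1))
mine_rules  = ("PUT", pl_endpoint("YOUTUBE-TEST", "rules", 1))
get_rules   = ("GET", pl_endpoint("YOUTUBE-TEST", "rules", 1))
print(get_rules[1])
```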