On Saturday, January 30th, Jans Aasman of Franz Inc. will give a four-hour tutorial on working with AllegroGraph. The first 45 minutes will cover why one might want to work with a triple store, along with a number of use cases for triple stores. Then we'll dive into the tutorial and cover the following topics.
1. Basic Operations: Creating, opening and closing an RDF triple store. Adding triples manually or by loading from an RDF file, deleting triples and getting triples out of a triple store.
2. SPARQL: the standard W3C query language for the Semantic Web.
3. Prolog: Creating rules and querying the triple store with Prolog.
4. RDFS++ reasoning: AllegroGraph covers all of RDFS and most of OWL. We'll do a quick 101 on the basic concepts of reasoning.
5. Full text indexing and how to use it in SPARQL and Prolog.
6. Named Graphs: working with the fourth element of a triple. Most triple stores in existence are now quad stores, but for historical reasons we still call them triple stores.
7. Range Queries: AllegroGraph supports very efficient range queries over all numeric types, dates, telephone numbers, etc.
8. Social Network Analysis: AllegroGraph supports a full range of social network analysis functions. We'll do an interesting demo on a partial scrape of Facebook.
9. GeoSpatial queries: AllegroGraph supports efficient geospatial indexing. We'll show how to use that in SPARQL and Prolog. We will also show how this works with Google Maps.
10. Temporal reasoning: many users of AllegroGraph work with events that have a temporal extent. We created a set of temporal primitives that make it easy to work with time intervals (essentially a quantitative version of Allen's interval algebra).
11. Gruff: most of the above functionality will be demonstrated in Gruff, the visual navigation tool for AllegroGraph.
12. AGWebView: a web-based interface to AllegroGraph.
13. A discussion of the Python and Java interfaces to the server version of AllegroGraph.
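To make topics 1 and 6 concrete, the core ideas can be sketched without any RDF library at all. The toy `QuadStore` class below is a hypothetical illustration (plain Python tuples, not the real AllegroGraph API): each statement carries a graph as its fourth element, and `None` acts as a wildcard in pattern matching, much like a variable in a SPARQL basic graph pattern.

```python
# Conceptual sketch of a quad store -- NOT the AllegroGraph API.
# Each statement is (subject, predicate, object, graph); None is a wildcard.

class QuadStore:
    def __init__(self):
        self.quads = set()

    def add(self, s, p, o, g="default"):
        """Add a statement; the graph name is the fourth element of the quad."""
        self.quads.add((s, p, o, g))

    def delete(self, s, p, o, g="default"):
        """Remove a statement if present."""
        self.quads.discard((s, p, o, g))

    def match(self, s=None, p=None, o=None, g=None):
        """Return all quads matching the pattern; None matches anything."""
        pattern = (s, p, o, g)
        return [q for q in self.quads
                if all(w is None or w == v for w, v in zip(pattern, q))]

store = QuadStore()
store.add("ex:jans", "ex:worksFor", "ex:franz", g="people")
store.add("ex:franz", "ex:makes", "ex:allegrograph", g="products")

# "Who works for whom?" -- analogous to: SELECT ?s ?o WHERE { ?s ex:worksFor ?o }
who_works = store.match(p="ex:worksFor")
```

The `g` parameter shows why "quad store" is the accurate name: restricting a query to one named graph is just one more component in the pattern.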
In addition to the tutorial, Jans will present a few demos.
Demo 1: Combining Google News, Entity Extraction, and Linked Data in a Triple Store
Entity extraction is a technology for extracting entities such as names, locations, dates, and industry-specific terminology from text. We applied a professional entity extractor (Cogito from Expert System S.p.A.) to a scrape of Google News. For a given day, our scraper takes all of the main categories in Google News, finds all of the subcategories, and then scrapes five articles within each. This usually yields about 700 articles a day. We put the extracted entities in AllegroGraph and then link all of the people and places mentioned in the news articles to public linked datasets such as DBpedia and Geonames.
We’ll use Gruff to demo some queries on this dataset that are currently impossible with regular search engines.
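The linking step in this pipeline can be sketched as follows. This is a hypothetical illustration, not the actual demo code: the entity-to-URI table is invented here, whereas the real pipeline would resolve names against DBpedia and Geonames; the `ex:mentions` predicate is likewise a placeholder.

```python
# Conceptual sketch of linking extracted entities to linked-data URIs.
# The lookup table below is made-up illustration, not a real DBpedia query.

ENTITY_URIS = {
    "Barack Obama": "http://dbpedia.org/resource/Barack_Obama",
    "Berlin": "http://dbpedia.org/resource/Berlin",
}

def link_entities(article_uri, entities):
    """Produce (subject, predicate, object) triples tying an article to linked data."""
    triples = []
    for name in entities:
        uri = ENTITY_URIS.get(name)
        if uri:  # entities with no known URI are simply skipped
            triples.append((article_uri, "ex:mentions", uri))
    return triples

triples = link_entities("ex:article42", ["Barack Obama", "Berlin", "Unknown Person"])
```

Once every article is tied to canonical URIs this way, a query can traverse from a news story into DBpedia or Geonames and back, which is exactly what keyword search engines cannot do.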
Demo 2: Combining a set of public life-science data sets in AllegroGraph.
We took the publicly available Drugbank, DailyMeds, Sider, Diseasome and ClinicalTrialDB datasets and loaded them into AllegroGraph. Then we created additional links between the texts in ClinicalTrialDB and specific drugs, diseases, targets, and side effects. With that we can do some very interesting discovery in ways that are nearly impossible without Semantic Technology.
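The kind of cross-dataset discovery this enables boils down to joining statements that originate in different sources. The sketch below is illustrative only: the drug, disease, and side-effect facts are invented, and the predicates are placeholders, but the shape of the join (one pattern per dataset, combined on a shared variable) mirrors what the corresponding SPARQL query would do.

```python
# Conceptual sketch of cross-dataset discovery via a naive triple join.
# All facts and predicates below are invented for illustration.

triples = [
    ("ex:drugA", "ex:treats", "ex:hypertension"),     # e.g. drug/disease links
    ("ex:drugB", "ex:treats", "ex:hypertension"),
    ("ex:drugA", "ex:hasSideEffect", "ex:headache"),  # e.g. side-effect data
    ("ex:drugB", "ex:hasSideEffect", "ex:nausea"),
]

def drugs_treating_without_side_effect(disease, side_effect):
    """Drugs that treat `disease` but are not linked to `side_effect`."""
    treats = {s for s, p, o in triples if p == "ex:treats" and o == disease}
    bad = {s for s, p, o in triples if p == "ex:hasSideEffect" and o == side_effect}
    return sorted(treats - bad)

result = drugs_treating_without_side_effect("ex:hypertension", "ex:headache")
```

Because both patterns run over one unified store, the join crosses dataset boundaries for free; doing the same across separate relational databases would require exporting and reconciling schemas first.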