Over 10,000 physical typewritten documents from 1932 to 1941 had to be digitised, structured, and connected to create a single, centralised knowledge source that enables the analysis of historical processes.
‘The amount of savings in time and effort [the search optimization] can deliver for our home offices, for our customers, is incredible.’
--Mayank Gupta, SVP for data, LPL Financial
Christophe Willemsen, CTO of GraphAware, explains how to apply NLP to extract entities and key phrases for building and searching knowledge graphs.
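To make the idea concrete, here is a toy sketch of that pipeline — not GraphAware's actual implementation. It uses a naive capitalised-phrase heuristic as a stand-in for a real NLP entity extractor, and links entities that co-occur in a sentence into a simple in-memory graph; all names and heuristics are illustrative.

```python
import re
from collections import defaultdict

def extract_entities(sentence):
    """Naive entity extraction: runs of capitalised words.
    A stand-in for a real NLP model such as an NER tagger."""
    return re.findall(r"[A-Z][A-Za-z0-9]+(?: [A-Z][A-Za-z0-9]+)*", sentence)

def build_graph(sentences):
    """Link entities that co-occur in the same sentence,
    producing a minimal in-memory knowledge graph."""
    graph = defaultdict(set)
    for sentence in sentences:
        entities = set(extract_entities(sentence))
        for a in entities:
            for b in entities:
                if a != b:
                    graph[a].add(b)
    return graph

graph = build_graph([
    "Christophe Willemsen works at GraphAware.",
    "GraphAware builds Hume on Neo4j.",
])
```

In a real deployment, the extracted entities and co-occurrence edges would be written to Neo4j as nodes and relationships rather than held in a Python dict.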
Mayank Gupta, SVP of Data, and Wren Chan, VP of Foundational Architecture and Innovation, from LPL Financial present how they use GraphAware Hume and Neo4j to power financial chatbots.
In this talk, Christophe Willemsen, CTO at GraphAware, a company specialising in graph database technology, shows how to use Neo4j’s Lucene-based full-text search engine to achieve relevant search results, sharing tips and tricks for optimising search queries and improving the accuracy of the results.
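Lucene ranks full-text matches with term-frequency/inverse-document-frequency-style relevance scoring. The following is a toy TF-IDF sketch to show the intuition — it is not Neo4j's or Lucene's actual scoring formula (Lucene uses BM25 by default), and the tokenisation is deliberately simplistic.

```python
import math
from collections import Counter

def tfidf_scores(query, docs):
    """Score each document against the query with a toy TF-IDF,
    loosely mimicking how a full-text index ranks matches."""
    terms = query.lower().split()
    tokenised = [doc.lower().split() for doc in docs]
    n = len(docs)
    scores = []
    for tokens in tokenised:
        counts = Counter(tokens)
        score = 0.0
        for term in terms:
            # document frequency: how many docs contain the term
            df = sum(1 for t in tokenised if term in t)
            if df == 0:
                continue
            # rarer terms contribute more to the score
            score += counts[term] * math.log(1 + n / df)
        scores.append(score)
    return scores

docs = [
    "graph databases power relevant search",
    "full-text search with lucene",
    "cooking with garlic",
]
scores = tfidf_scores("relevant search", docs)
```

Inside Neo4j itself, the equivalent step would be a full-text index query returning nodes with their Lucene relevance scores.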
See how combining technologies adds another level of quality to search results. In this new Refcard, we include code and examples for using Elasticsearch to enable full-text search and Neo4j to power graph-aided search.
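A minimal sketch of the combination the Refcard describes: take full-text relevance scores (as Elasticsearch would return them) and boost each hit by how strongly it is connected, in the graph, to things the user cares about. The data structures, weights, and names here are illustrative, not the Refcard's actual code.

```python
def graph_aided_rank(text_scores, graph, user_interests):
    """Re-rank full-text hits by boosting documents that are
    connected in the graph to the user's interests."""
    ranked = []
    for doc_id, score in text_scores.items():
        # count graph connections between this document and the interests
        boost = sum(1 for topic in graph.get(doc_id, ()) if topic in user_interests)
        # 0.5 is an arbitrary illustrative boost weight
        ranked.append((doc_id, score * (1 + 0.5 * boost)))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# two documents with identical full-text scores...
text_scores = {"doc1": 1.0, "doc2": 1.0}
# ...but only doc1 is linked to a topic the user follows
graph = {"doc1": {"finance"}, "doc2": {"cooking"}}
result = graph_aided_rank(text_scores, graph, {"finance"})
```

The graph lookup is exactly the part a graph database handles well: in production it would be a real-time traversal in Neo4j rather than a dict access.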
The talk presents Neo4j as a viable tool in a relevant-search ecosystem, demonstrating that it not only offers a suitable model for representing complex data, such as text, user models, business goals, and context information, but also provides efficient ways to navigate this data in real time. Moreover, at an early stage in the search-improvement process, Neo4j can help relevance engineers identify the salient features that describe the content, the user, or the search query; later, those features can be passed to the search engine through extraction and enrichment. The talk also demonstrates how the graph model can support all the components of relevant search, and it concludes with a complete end-to-end infrastructure for relevant search in a real use case, showing how it integrates with tools such as Elasticsearch, Apache Kafka, Stanford NLP, OpenNLP, and Apache Spark.
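The extraction-and-enrichment step mentioned above can be pictured as attaching extracted features to each document before it is indexed, so the search engine can use them at query time. A minimal sketch, with an assumed document shape and a hypothetical feature extractor:

```python
def enrich(doc, feature_extractors):
    """Attach extracted features to a document before indexing.
    `feature_extractors` maps a feature name to a function of the text."""
    enriched = dict(doc)  # leave the original document untouched
    for name, extractor in feature_extractors.items():
        enriched[name] = extractor(doc["text"])
    return enriched

doc = {"id": 1, "text": "Neo4j powers graph-aided search"}
# word_count is a trivial placeholder; real extractors would produce
# entities, key phrases, or topics via Stanford NLP / OpenNLP
enriched = enrich(doc, {"word_count": lambda text: len(text.split())})
```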
From user preferences and location to time of day and weather, complex context representations have been the key to delivering personalised content. Graph databases excel at handling large amounts of complex data and have therefore been at the core of many modern real-time recommendation systems. In the near future, graph databases will play an equally important role in search personalisation.
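To illustrate how context signals might feed into personalisation, here is a toy scoring function that blends a base recommendation score with whichever context signals are currently active. The signal names and weights are entirely illustrative assumptions.

```python
def contextual_score(base_score, context, weights):
    """Blend a base recommendation score with active context signals
    (e.g. time of day, weather, location)."""
    adjustment = sum(
        weights.get(signal, 0.0)
        for signal, active in context.items()
        if active
    )
    return base_score * (1 + adjustment)

# illustrative weights: evening viewers and rainy days boost this item
weights = {"evening": 0.2, "raining": 0.3}
score = contextual_score(1.0, {"evening": True, "raining": False}, weights)
```

In a graph-backed system, the active context would be derived by traversing from the user node to connected context nodes at query time, rather than passed in as a flat dict.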