Under the Hood
While viewing any Library.Link Network linked data resource, click on "Share" then click on "Raw Data."
A Library.Link site can have millions of individual Web pages; in fact, the average Library.Link site has more pages than all of English-language Wikipedia. Crawlers for search engines and other applications work by attempting to load every page they can find and following the links on each page to discover more. On sites this large, that can be a slow and unreliable way to find every page, so Library.Link uses a mechanism recommended by the search engines themselves: a sitemap.
A sitemap is a file that lists the pages of your site to tell Google and other search engines how your content is organized. Search engine crawlers such as Googlebot read this file to crawl your site more intelligently. There are several sitemap formats, but the standard and most popular is XML. In practice this means presenting files on the Web such as
[site domain]/harvest/sitemap.xml
And numbered variants such as
[site domain]/harvest/sitemap3.xml
These files are submitted to the search engines by the Library.Link service, as recommended.
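As a rough illustration of how a crawler consumes one of these files, the sketch below fetches a sitemap and prints the page URLs it lists. The domain is a placeholder, and whether a given file is a plain sitemap or a sitemap index (whose entries point to further sitemap files) depends on the site.

```python
# Minimal sketch: fetch a Library.Link sitemap and list the URLs it contains.
# The domain below is a placeholder -- substitute a real Library.Link site domain.
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
url = "https://example.library.link/harvest/sitemap.xml"  # placeholder

with urlopen(url) as response:
    tree = ET.parse(response)

# A sitemap lists pages in <url><loc>...</loc></url> entries; a sitemap index
# lists further sitemap files in <sitemap><loc>...</loc></sitemap> entries.
for loc in tree.getroot().iter(SITEMAP_NS + "loc"):
    print(loc.text)
```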
Ideally, sitemap files themselves would never appear in search results, because they exist to support search engine crawlers, not actual users. Unfortunately, search engines are known to include sitemaps in their indexes, so they sometimes show up in results.
This is not something under the control of the Library.Link Network. It affects all sites that use sitemaps, not just Library.Link sites. In the screenshot below you can see a Google search result where Google’s own sitemap appears, as well as one from popular technology site TechCrunch.
In theory, as search engines improve their indexing of the web, such sitemaps should appear in results only rarely, if ever, but again this is not something we can control.
We currently process $0 values as AuthorityLinks and transform known control numbers (Library of Congress, OCLC, VIAF, NLM, etc.) into URIs. These URIs are then used in our authority reconciliation and enrichment processes.
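As a simplified sketch of what that transformation looks like, the example below maps a few common $0 control-number prefixes to public URI patterns. The prefixes and URI templates shown are illustrative assumptions, not the Network's actual rules.

```python
import re

# Illustrative prefix-to-URI templates; assumptions for this sketch only.
URI_TEMPLATES = {
    "DLC": "http://id.loc.gov/authorities/names/{id}",  # Library of Congress name authority
    "OCoLC": "http://www.worldcat.org/oclc/{id}",       # OCLC control number
    "viaf": "http://viaf.org/viaf/{id}",                # VIAF identifier
}

def control_number_to_uri(value):
    """Turn a $0 value such as '(OCoLC)12345' into a URI, if the prefix is known."""
    match = re.match(r"\((?P<prefix>[^)]+)\)(?P<id>.+)", value)
    if not match:
        return None
    template = URI_TEMPLATES.get(match.group("prefix"))
    if template is None:
        return None
    return template.format(id=match.group("id").replace(" ", ""))

print(control_number_to_uri("(OCoLC)12345"))      # http://www.worldcat.org/oclc/12345
print(control_number_to_uri("(DLC)n  79021164"))  # http://id.loc.gov/authorities/names/n79021164
```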
Depending on which language you prefer, we recommend the following options for RDFa parsers:
Python: https://pypi.org/project/versa/
A quick example of using Versa from the Python interpreter:
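Versa's own API is not shown here; as a stand-in, the sketch below uses only the Python standard library to pull out the RDFa Lite attributes (vocab, typeof, property, resource) that an RDFa parser such as Versa reads from a Library.Link page. The resource URL is a placeholder.

```python
# Stand-in sketch (not Versa's API): scan a Library.Link page for RDFa Lite attributes.
from html.parser import HTMLParser
from urllib.request import urlopen

RDFA_ATTRS = ("vocab", "typeof", "property", "resource", "prefix")

class RDFaLiteScanner(HTMLParser):
    """Collect RDFa Lite attributes from start tags."""
    def __init__(self):
        super().__init__()
        self.statements = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        found = {name: attrs[name] for name in RDFA_ATTRS if name in attrs}
        if found:
            self.statements.append((tag, found))

url = "https://example.library.link/resource/example"  # placeholder resource URL
with urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

scanner = RDFaLiteScanner()
scanner.feed(html)
for tag, found in scanner.statements[:20]:
    print(tag, found)
```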
Bi-directional linking is a best practice on the Web. To implement bi-directional linking for your library, use HTML on your library website to link back to your Library.Link domain.
You can create your own HTML or edit one of the examples below. Replace [Library Name] with your library's name and "http://baseurl.library.link" with your base URL. You can also resize the Library.Link Network logo by changing its width and height.
HTML with color logo:
HTML with black and white logo:
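The two examples differ only in which logo image they reference. A minimal sketch of the shared markup, with a placeholder logo path (use the color or black-and-white logo file supplied by the Network) and arbitrary dimensions, looks like this:

```html
<!-- Sketch only: replace [Library Name], the base URL, and the placeholder logo path;
     adjust width and height to fit your site. -->
<a href="http://baseurl.library.link" title="[Library Name] on the Library.Link Network">
  <img src="library-link-network-logo.png"
       alt="[Library Name] on the Library.Link Network"
       width="200" height="50">
</a>
```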
The Library.Link Network is committed to ensuring digital accessibility for people with disabilities. We are continually improving our data pages and applying the relevant accessibility standards. Library.Link Network pages are designed to drive a number of applications and are not intended to be end-user sites beyond inspection of library data.
The Web Content Accessibility Guidelines (WCAG) defines requirements for designers and developers to improve accessibility for people with disabilities. It defines three levels of conformance: Level A, Level AA, and Level AAA. The Library.Link Network is partially conformant with WCAG 2.1 level AA. Partially conformant means that some parts of the content do not fully conform to the accessibility standard. We encourage all applications that run on top of Library.Link Network data to comply with WCAG 2.1 Level AAA.
We welcome your feedback on the accessibility of the Library.Link Network. Please let us know if you encounter accessibility barriers:
Phone: 1-888-993-1114
E-mail: support@library.link
We try to respond to feedback within 5 business days. Any accessibility issues that cannot be resolved immediately will be listed as known issues here until addressed by a future release.
One use of Linked Data is to help Google, Amazon, Microsoft, and others answer natural-language questions. Linked Data for libraries makes library data one of the many sources that allow a person to ask their Google Home, "Hey Google, find me a great, scary book," and get an answer.
For the above example:
Answering “Hey Google, find me a great, scary book” requires unpacking a lot of implicit context and natural language:
“me" = personalization including search history (advanced reading vs children’s books) including my permissions, where I’m located, what is around me.
“scary” = personalization plus appeal and subject terms to narrow in on the answer (we don’t see any search engine answering this without appeal terms)
“great” = popular / reviewed / accredited / awarded
“find” = a book in a format (audio, ebook, print) that reflects my previous interactions and works with the device I’m using to query the system (phone, tablet, voice assistant, etc.)
No one group has all this information, so Google is looking to stitch an answer together from different sources, including library data in Linked Data formats.
When you search for a book, find the Borrow section of the Knowledge Panel that shows up in Google. Your location may be used by default, or you can enter your location into the search bar.
Library locations and Borrow Actions are integrated into Google's Knowledge Graph. In the example below, each format in the Borrow Action menu links to Dallas Public Library's Polaris catalog via the Library.Link Network.