While viewing any Library.Link Network linked data resource, click "Share," then click "Raw Data."
A Library.Link site can have millions of individual web pages. In fact, the average single Library.Link site has more pages than all of English-language Wikipedia. Crawlers for search engines and other applications work by attempting to load every page they can find and following links on each page to discover additional ones. This can be a slow and unreliable way to discover all the pages on such enormous sites, so Library.Link uses a mechanism recommended by search engines: a sitemap.
A sitemap is a file that lists the web pages of your site, telling Google and other search engines how your site's content is organized. Search engine web crawlers such as Googlebot read this file to crawl your site more intelligently. There are several sitemap formats, but the standard and most popular is the XML format. This involves publishing sitemap files, along with numbered variants, at well-known URLs on the site.
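For illustration only (actual sitemap URLs and contents vary by site), a sitemap index in the standard XML format points crawlers at the individual numbered sitemap files:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sitemap index; the URLs below are placeholders, not real Library.Link addresses -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://example.library.link/sitemap1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://example.library.link/sitemap2.xml</loc>
  </sitemap>
</sitemapindex>
```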
These files are submitted to the search engines by the Library.Link service, as recommended.
Ideally, sitemap files would not themselves appear in search results, because they are meant to support search engine crawlers, not actual users. Unfortunately, search engines are known to include sitemaps in their indexes, so they sometimes show up in results.
This is not something under the control of the Library.Link Network. It affects all sites that use sitemaps, not just Library.Link sites. In the screenshot below you can see a Google search result where Google’s own sitemap appears, as well as one from popular technology site TechCrunch.
In theory, as search engines improve their indexing of the web, such sitemaps should either become very rare in results or cease appearing altogether, but again this is not something we can control.
We currently process $0 values as AuthorityLinks and transform known control numbers (Library of Congress, OCLC, VIAF, NLM, etc.) into URIs. These URIs are further used in our authority reconciliation and enrichment processes.
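As a rough sketch of what this transformation involves (the actual Library.Link pipeline is not public; the function name, prefix table, and URI patterns below are illustrative assumptions), a MARC $0 control number such as "(OCoLC)606936" can be parsed into a source prefix and a number, then mapped to an authority URI:

```python
# Hypothetical sketch of $0-to-URI transformation; not Library.Link's actual code.
import re

# Illustrative URI templates keyed by the parenthesized source prefix in $0.
URI_PATTERNS = {
    'DLC': 'http://id.loc.gov/authorities/names/{}',   # Library of Congress
    'OCoLC': 'http://www.worldcat.org/oclc/{}',        # OCLC
    'viaf': 'http://viaf.org/viaf/{}',                 # VIAF
}

def control_number_to_uri(value):
    """Turn a $0 value such as '(OCoLC)606936' into a URI, if the prefix is known."""
    m = re.match(r'\(([^)]+)\)\s*(.+)', value)
    if not m:
        return None
    source, number = m.group(1), m.group(2).replace(' ', '')
    pattern = URI_PATTERNS.get(source)
    return pattern.format(number) if pattern else None

print(control_number_to_uri('(OCoLC)606936'))
# → http://www.worldcat.org/oclc/606936
```

The resulting URIs can then serve as keys for reconciliation against authority datasets.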
Depending on what language you prefer, we recommend the following options for RDFa parsers:
A quick example of using Versa from the Python interpreter:

>>> import urllib.request
>>> from versa.reader import rdfalite
>>> from rdflib import ConjunctiveGraph
>>> site = 'http://link.lib.rpi.edu/resource/PS4pKuKy4cg'
>>> g = ConjunctiveGraph()
>>> with urllib.request.urlopen(site) as fp:
...     rdfalite.tordf(fp.read(), g, site)
...
>>> len(g)
99
Bi-directional linking is best practice on the Web. To implement bi-directional linking for your library, use HTML on your library website to connect to your link domain.
You can create your own HTML or edit one of the examples below. Replace [Library Name] with your library's name and "http://baseurl.library.link" with your base URL. You can also resize the Library.Link Network logo by updating the width and height attributes.
HTML with color logo:
<p style="text-align: center;">
  <a href="http://baseurl.library.link" target="_blank" rel="noopener">[Library Name]</a> is a member of the Library.Link Network
</p>
<p>
  <img style="display: block; margin-left: auto; margin-right: auto;" src="http://library.link/img/library_link_logo.png" width="100" height="38" />
</p>
HTML with black and white logo:
<p style="text-align: center;">
  <a href="http://baseurl.library.link" target="_blank" rel="noopener">[Library Name]</a> is a member of the Library.Link Network
</p>
<p>
  <img style="display: block; margin-left: auto; margin-right: auto;" src="http://library.link/img/library_link_logo_bw.png" width="100" height="38" />
</p>