Off-Campus Access Dilemma

How do students and employees at an institution access resources off campus? In my experience, many library staff members will say that plenty of people use Google Scholar but that students really should start on the library homepage because that’s the best way to access library resources. Is this really an answer librarians should be satisfied with? Are those the confines we want to limit people to? Have you ever tried to access scholarly material the way a student or faculty member would?

I recently attended a talk at ENUG by Rich Wenger entitled “IP Filtering is Dead. What’s next?” which touched heavily on this topic. Rich mentioned a fascinating video on the Scholarly Kitchen blog that presents a real-life example of the stumbling blocks students and researchers face and what a poor user experience the result can be. It’s worth watching, especially if you’ve never tried to access articles off campus. To address some of these issues by taking an entirely new approach to access, Rich talked about an exciting collaboration forming between subscribers and vendors called RA21. I look forward to future directions and outcomes from this adventure.

Meanwhile, I’ve been frustrated myself with accessing resources off campus, which is particularly aggravating since I know the systems quite well and am still hitting friction and pain points in accessing material my library subscribes to.

So earlier this year I took a stab at making access slightly easier and discovered something called LibX as well as a GettingThingsTech blog post about different proxy redirect options. This was great information, and I diligently set out to try many of these options. Essentially, each option lets you click an add-on (or something similar) and have a page redirect through your institution’s proxy. I found that the Chrome extension works great, but it’s only available on larger-screen devices, which isn’t useful on phones. I won’t install outdated add-ons, so the Firefox and Safari options were a no-go. However, the Zotero customization is something I recommend to everyone who actively uses Zotero.

As for the LibX add-on for Chrome and Firefox, I successfully set one up for my institution and, while it works and has many possible features like a direct search of your discovery service or catalog, I wouldn’t recommend most libraries tinker with this option. It took me a few hours to get the configuration correct, and I think there are better options that require fewer clicks.

I continued my search and discovered a bookmarklet option described by UCSF Libraries that uses a bookmark with JavaScript in the URL field to redirect pages through your proxy server. If you have JavaScript enabled on your device (and you probably do unless you’ve turned it off), this is a fairly simple option that works on any browser and any device. I simplified the directions and made them a little more browser agnostic before sharing them with some faculty. The response so far has been resounding appreciation for the simplicity of this workaround. So an acceptable, if not preferable, option has been found that makes for a slightly more seamless user experience, but the journey will continue.
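For the curious, the core of the bookmarklet is a single line of JavaScript saved in the bookmark’s URL field. Here is a minimal sketch, assuming an EZproxy-style server; login.proxy.example.edu is a placeholder for your institution’s actual proxy hostname:

javascript:void(location.href='https://login.proxy.example.edu/login?url='+encodeURIComponent(location.href))

Clicking the bookmark while on a publisher’s page sends the current address through the proxy, which prompts for institutional credentials if needed and then reloads the page with subscription access.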

Can the library community make this user experience better? I know we can, it’ll just take some time, collaboration, and imagination.

Capturing Search Terms using Google Analytics

If you use Google Analytics, do you ever wonder what the “Search Terms” in the dashboard could mean? One library-centric (and probably highly contentious) use is to capture the search terms used in online databases. In the broad sense, these databases are any search entity you may have, such as journal finders, discovery layers, ILS catalogs, etc. If you can add the Google Analytics code to the website and the search terms are retained in the URL, you can almost certainly capture search terms. (Note: there are certainly other ways to gather search terms, but I’m sticking to this version for now.)

In your Google Analytics account, go to the Admin for the account/website you want to capture search terms for. Under View, click View Settings. This is where you can adjust the basic settings regarding what you want captured. You’ll need:

  • the website’s base URL (most likely everything before a question mark)
  • your current timezone
  • any parameters you want to exclude, like a session ID (look at the URL and/or any documentation to determine which parameters you don’t want to analyze)
  • “Site search Tracking” turned on
  • the query parameter(s) (again, investigate the URL for your search term and enter the identifier used to signify the search term; see the example below)

Don’t forget to click Save.
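To make those last two settings concrete, suppose your catalog produced search URLs like this hypothetical one:

https://catalog.example.edu/search?q=climate+change&session=ab12cd34

Here the base URL is everything before the question mark, the query parameter to enter is q, and session is the sort of parameter you would exclude.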

Enjoy watching the search terms roll in, and perhaps you’ll discover the need to purchase items in a different subject area or to make certain resources easier to find.

Tracking Outbound URLs on the Library’s Homepage

After a long hiatus from tinkering with library technology due to chairing a classroom renovation committee and doing the backend work of a book inventory project, I finally got to some of my sidelined to-do lists.

Google Analytics (GA) is one popular tool for tracking website usage; however, the default setup only tracks usage within the domain listed in the settings. For a library with lots of links to external resources like catalogs, journal finders, and databases, the default setting can feel lacking. It’s also only so interesting to know that x number of people visited your site, spent at most a few minutes on your homepage, and left. Event Tracking solves that dilemma.

At first, Event Tracking looks like a lot of coding, but it doesn’t have to be. The Google Developers pages nicely directed me to some GA code on GitHub called autotrack. You add the autotrack JavaScript file to your web server and a few lines of code to the GA snippet already on your website. If you don’t already have GA, GA gives you the basic code to insert when setting up a site, and the additional autotrack lines get inserted into that.

Right now, I’ve added outboundFormTracker to track our LibAnswers search box, eventTracker to track our EDS search box (it wasn’t acting like a form for autotrack, so I did have to add a small amount of code to the submit input element), and outboundLinkTracker to track everything else. So far, the heaviest usage from our homepage is EDS and our database list. I look forward to seeing what is and isn’t really used over the summer and into the fall semester.
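For reference, here is a minimal sketch of the combined snippet, based on the autotrack README; the UA-XXXXX-Y tracking ID and the path to autotrack.js are placeholders for your own values:

<script>
// Standard GA command queue stub, then the trackers mentioned above
window.ga = window.ga || function(){(ga.q = ga.q || []).push(arguments)}; ga.l = +new Date;
ga('create', 'UA-XXXXX-Y', 'auto');
ga('require', 'outboundLinkTracker');  // clicks on links leaving your domain
ga('require', 'outboundFormTracker');  // forms that submit to another domain
ga('require', 'eventTracker');         // declarative events via ga-on attributes
ga('send', 'pageview');
</script>
<script async src="https://www.google-analytics.com/analytics.js"></script>
<script async src="/path/to/autotrack.js"></script>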

Updating ISO keys in the Customization Manager

A quick tip for those starting to use the ISO protocol with ILLiad, especially if you are hosted by OCLC: it turns out that if you edit any of the ISO keys in ILLiad’s Customization Manager, the ISO server needs to be restarted. At the moment, it doesn’t restart automatically on any schedule.

So after you edit those ISO keys, submit a ticket to OCLC to restart the ISO server.

jQuery Dialog Box Follow-up

Some have requested more details for the post I wrote earlier this year: Creating a Popup-like Message on ILLiad Webpages using jQuery.

Where did I put the dialog box script?

I added the main script code to the file containing <#STATUS>. In our case, I only added this to the include_header file.

What code did you use?

I needed to add code in two places. In the file containing the <#STATUS> line, I added the following, wrapped in <script> tags.
$( "#dialog" ).dialog({
modal: true,
title: "Status",
buttons: {
"OK": function() {
$( this ).dialog( "close" );
}}
});

For each web status key of interest in the Customization Manager, I added id="dialog" to an opening div tag, wrote the message I wanted to display, and added a closing div tag.
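For example, a web status key might end up looking something like this (the message text is just a placeholder):

<div id="dialog">Your request has been submitted. We will email you when the item is available.</div>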

Has this helped?

Absolutely! We get far fewer questions from staff, students, and faculty.

Lessons Learned: Decreasing Bandwidth Usage

I recently received a message that a site I manage for a library organization was about to exceed its bandwidth allotment. There is a small user group of webmasters within this organization, and bandwidth limits on the WordPress.org sites occasionally make the email discussion lists. The typical suggestions are:

  • ensure you have a robots.txt file
  • install a WordPress plugin like WP Super Cache that caches pages
  • ask for more bandwidth

The first two suggestions were put in place ages ago, and I did end up pursuing the third, an option I was grateful to have. However, I knew there had to be another way to reduce my sudden spike in usage, since it was not attributable to more visitors.

In fact, my spike seemed to correlate with some PDF files I had put in a post. I considered that action routine and almost trivial at the time, but WordPress created a “preview” of each PDF that caused the entire file to download every time the page was opened. Since some of these files were displayed on the homepage, this greatly increased our bandwidth usage.

Lessons learned:

  • Host slides, large PDFs, videos, and large photos elsewhere (linking to them is your friend)
  • Compress any file before uploading it to your site
  • Carefully consider whether to preview PDF files stored locally on your server

Here are some good explanations I found after coming to this realization:
Reduce Your Website’s Bandwidth and Storage Usage
10 Reasons why you should never host your own videos

Harvesting Institutional Repository Records

One of my final summer projects before campus descended into controlled chaos was integrating our institutional repository records from Bepress into our discovery layer from EBSCO. As usual, I learned some interesting tidbits along the way.

To get started, Bepress has some good information about harvesting records from their system: Digital Commons and OAI-PMH: Harvesting Repository Records. Much of this resource is about using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). What’s neat about this protocol is that anyone with an internet connection can obtain the metadata and contents in a consistent format. I know this doesn’t sound impressive, but ask me whether it was easier to have EBSCO add our institutional repository or our catalog of books to the discovery layer and, hands down, the institutional repository wins. Extracting and displaying our library catalog involved exporting MARC records in specific formats, uploading the file to an FTP site, and converting that file to a format ready for our discovery layer based on lots of field mappings specific to our library.

On the other hand, our institutional repository metadata and contents can be viewed by using this link: http://digitalcommons.esf.edu/do/oai/?verb=ListRecords&metadataPrefix=oai_dc. The link is essentially the base URL for the repository with a few “commands” attached. The same can be said for obtaining several other details about the content, including the field abbreviations found in the <setSpec> field in the previous link. The setSpec details can be viewed by adding /do/oai/?verb=ListSets&metadataPrefix=oai_dc to the base repository URL. Check out the OAI-PMH documentation for more possibilities.
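A few other OAI-PMH verbs are worth trying against the same base URL (the GetRecord identifier below is a placeholder; copy a real one from a ListRecords response):

http://digitalcommons.esf.edu/do/oai/?verb=Identify
http://digitalcommons.esf.edu/do/oai/?verb=ListMetadataFormats
http://digitalcommons.esf.edu/do/oai/?verb=GetRecord&metadataPrefix=oai_dc&identifier=<identifier-from-ListRecords>

Identify describes the repository itself, ListMetadataFormats shows which metadata formats the repository can emit, and GetRecord returns a single record.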

So, in theory, the metadata fields from OAI-PMH repositories should be the same, and the people, vendors, and groups who want to use that information in different interfaces can create a method that is easy to replicate.

Mergefield tip for ILLiad Print Templates

Mail merges are a powerful tool in MS Word, and using them to enhance ILLiad’s print templates is no exception. It’s amazing what kind of “If Then” statements can be created, such as: if the Delivery Option is “Mail to Address”, then display the address lines you want; if not, display something else. But I always forget how to view, for lack of a better word, the code behind the mergefield statements and rules. The trick…

Alt + F9

Hitting those keys toggles between something similar to a preview and the full mergefield “code.” This is important for formatting and layout, as well as for getting the mergefield rules correct. The rules can take up a lot more space than the real deal.
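To illustrate, with field codes visible (Alt + F9), a rule like the one above looks something like this; DeliveryOption and Address1 are hypothetical field names standing in for whatever fields your template actually uses, and the braces must be inserted with Ctrl + F9 rather than typed:

{ IF { MERGEFIELD DeliveryOption } = "Mail to Address" "{ MERGEFIELD Address1 }" "Hold for pickup" }

Toggle Alt + F9 again and the same spot collapses back to the address line or the pickup text.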

Mass Find and Replace in Notepad++ using Regular Expressions

While working on an upgrade to ILLiad 8.6 from 8.5, I was reminded that the tabindex attribute is not necessary and that I hadn’t yet completely removed this code from our ILLiad webpages. I had removed some of the references, but at first glance I assumed this task would be daunting; after all, there are dozens of webpage files that would need to be checked.

Not so much! With about 5-10 minutes of thinking/work, I was able to remove over 400 instances of tabindex in all of our ILLiad webpages!

After some testing on a small batch of files I settled on the following process:

  • First, I created a regular expression to match our tabindex instances: (tabindex=")\d+(")
  • Then, I opened all of my local ILLiad webpage copies in Notepad++
  • Using the Replace feature (Search->Replace), enter the regular expression in Find what and leave Replace with empty, so each match is simply deleted
  • Under Search Mode, select Regular Expression
  • Click Replace All in All Opened Documents
  • Then click File -> Save All
  • Finally copy the local files to the server
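As a quick sanity check, here is what the replacement does to a typical line (a hypothetical ILLiad form field; the leftover extra space is harmless in HTML):

Before: <input type="text" name="Username" tabindex="1" />
After:  <input type="text" name="Username"  />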

ILLiad’s Odyssey Helper

Why I hadn’t persevered and figured this out until now shall remain a mystery. Those who use Odyssey Helper already know how valuable this feature can be. Basically, Odyssey Helper reduces monotonous clicks and improves efficiency by allowing scans to be done outside of ILLiad and then sent through Odyssey without touching ILLiad again. No more scanning, then opening the request, clicking “Mark Found, Scan Now”, clicking “OK” on the Scanner Not Found error message, and finally clicking “Send Via Odyssey.”

Atlas Systems has a nice video on the topic: https://atlas-sys.wistia.com/medias/ner9rl1d3x

The trick for testing with Doc Del that isn’t mentioned in the video or the documentation is that you need to change the “Process Type” to navigate between Lending and Document Delivery; the two must be processed separately in Odyssey Helper.

Currently, I’m setting up Odyssey Helper so that the PDF files reside on the library’s shared network folder and Odyssey Helper is open on one designated computer. That way, scans and downloads can be done by several people on different computers, yet sent from one location.

Note: In version 8.6 Odyssey Helper is morphing into Electronic Delivery Utility.