(Image of Radcliffe Camera, on a wall in Oxford. Copyright Karen Dolman, 2018).
Recently, at home, we signed up for the Hive system for our heating. As part of the deal, we now have an ‘Alexa’ device, which has been a bit of an eye-opener for us both. My partner is very techy, but not very digital, and I’ve had to have a few ‘Librarian’ moments with him in terms of using it correctly (for example, he asked her to ‘play Peatbog Faeries’ and she didn’t understand. I asked her to ‘play music by the Peatbog Faeries’ and she understood, rewarding us with some very good tracks!).
So, this little issue made me think about how this technology will affect how we teach students, and how they access information. We already have issues when trying to explain how to use key terms and words; how much more difficult will this be when we have to explain how to get the best from an AI system? As Librarians, we are used to the language of key terms, concepts, controlled vocabulary and, in Health and Medicine, MeSH Headings. However, this is not something that everyone will understand or develop, which leads me to wonder how we can build these skills to make sure our students, and the populace at large, are accessing correct, relevant and appropriate information.
I saw a conference advertised a while ago, at which one of the presentations was ‘Skills for the future academic library’. I couldn’t attend due to staffing demands, but my manager went and said it was a very interesting presentation. I followed this up and realised that, far from being only a future consideration, AI in libraries is already here. The presentation highlights that systems such as ‘Deakin Genie’, ‘Revision Assistant’ (Turnitin) and chatbots are already in use and becoming accepted as a way of communicating and disseminating information.
At Hallam, we don’t have any automated systems as yet, but we have recently introduced a new portal, MyHallam, which has a knowledge base for students to search. The idea is that they should be able to find answers to their queries within this system, only coming to seek help should they not find an answer there. The knowledge base will grow with the enquiries that come in, and, eventually, students should find answers to all of their questions, as well as interacting with us via this portal. It’s not really AI, but I envision that we won’t be long in subscribing to one of the above-mentioned systems. We already have access to an online 24-hour academic feedback service, so we are definitely moving in that direction.
Returning to the research issue: one comment in the presentation is particularly telling. A researcher points out that AI may make journal publications obsolete as a way of communicating research: rather than subscribing to many different journals, researchers could simply subscribe to a filtering service that tailors their research strategies to personal need, with the algorithms doing all the work for them. Hmmm, interesting.
For me, this raises the question: how do you know you are getting the best results? I may be being naïve here, but the elephant has to be grasped by the trunk. When I construct a search strategy and apply it to a database, I am flexible enough to do the initial scanning of articles myself, eliminating any that may not be peer-reviewed or of appropriate provenance, etc. Given the rigid nature of algorithms, I am not sure a filtering service would allow this flexibility, or that it would give me the best results every time. A conversation with a colleague also raised the issue that algorithms are constructed by people and can therefore reflect the biases and opinions of those people (i.e. white, middle-class and male). This is another issue that affects the validity of such systems, and it is exactly why I find them unreliable: people make mistakes.
Look at Facebook: it works on algorithms, the same as any other system, but at certain points it seems to get stuck in a loop. I regularly have to go in and change my settings to make it do what I want, and I consistently see only the results I interact with, which generates more of the same results, which I then interact with, and so on. Applying this to a research methodology, how can I be sure this isn’t also happening with a filtering service? When conducting research, the idea (especially in my areas) is to find as much of the available body of literature as possible, then select the most appropriate items using screening methods and evaluation tools. How can a filtering system manage that, given these algorithmic limitations? Indeed, one of the issues the presentation raises about the use of AI is precisely that of accuracy and validity.
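The feedback loop described above can be illustrated with a toy simulation (a hypothetical sketch, not any real platform’s algorithm): a ranker that boosts whatever the reader has already interacted with very quickly narrows down what it will ever show, which is exactly the worry for a literature-filtering service.

```python
import random

def simulate_filter_loop(topics, rounds=20, shown_per_round=3, boost=1.0, seed=0):
    """Toy model of an engagement-driven feed: each round the top-scoring
    topics are shown, the 'user' interacts with everything shown, and each
    interaction raises that topic's future ranking."""
    rng = random.Random(seed)
    scores = {t: rng.random() for t in topics}  # start roughly uniform
    ever_shown = set()
    for _ in range(rounds):
        shown = sorted(scores, key=scores.get, reverse=True)[:shown_per_round]
        ever_shown.update(shown)
        for t in shown:          # interacting boosts the same topics again
            scores[t] += boost
    return ever_shown

topics = [f"topic-{i}" for i in range(20)]
seen = simulate_filter_loop(topics)
print(f"{len(seen)} of {len(topics)} topics ever shown")  # prints: 3 of 20 topics ever shown
```

After the first round the three boosted topics permanently outrank the rest, so 17 of the 20 topics are never surfaced at all, however many rounds you run. A comprehensive literature search needs the opposite behaviour.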
However, I also realise that a lot of these systems are ‘learning’ systems (heuristics, for example), but I still beg leave to doubt, as there are questions about whether they will give the best results. The possibilities inherent in these systems are infinite, but, for me, there needs to be more evidence of their effectiveness before I subscribe wholeheartedly. We need to make sure our students (and staff!) understand the value of good research techniques, the systems out there designed to help them find appropriate resources, how to use those systems effectively, and their place in the research process. As a Librarian, I am all about saving the time of the user, but there is a definite ‘cutting corners’ feel to these systems that I am rather wary of. Watch this space…
Back to the original subject, however: Alexa has proved to be a hit in our home, although we still can’t make her talk to the heating system… I’ll leave that to the technician in the family.
References:
Cox, A., Pinfield, S. & Rutter, S. (2018). Skills for the intelligent library. CILIP Briefing: Skills for the future academic library. London: CILIP.