AI Corner: Putting AI to Work

By Steve Phelps

We all likely have some experience with AI by this point. For some, it is a tool; for some, a partner; and for some, an enemy. If AI is your enemy, this column is likely not for you. For those who use AI as a tool or partner, you may gain some ideas here. Consider this an effort to share a practice that has worked well, for me at least, in conducting research. The examples use specific AI tools that have both free and paid versions.

Let us start by being 100% clear: AI is not the thought producer; it is an information organizer and distributor. All AI output should be verified and vetted for accuracy. The validity of the information depends on a multitude of factors, starting with the inputs from the user.

Most research starts with a question, and the nature of the question is often guided by inquiry into as many of the factors surrounding it as possible. In academia, research is often informed by a literature review. Traditionally, this means search tools such as Google Scholar, EBSCO databases, and visits to the library. These searches are time-consuming and significant contributors to research fatigue. At some point, the depth of the review begins to suffer under the tedium of reading and deciphering vast amounts of material that is often not relevant.

One tool that can greatly speed up the initial research process is Litmaps. You enter a topic, and Litmaps selects articles from multiple databases. You preview the articles, select the ones you deem most relevant, and place them in a folder, from which a Litmap is generated. The Litmap finds other related articles (based on your selections) and plots them on a chart where the X axis represents publication year and the Y axis represents the number of times each article has been cited. The articles are also connected by lines to related articles. As you look through the results, you can remove articles that are not relevant, and the map refines itself with additional results. There is also a filter function that lets you organize results by shared authors, text similarity, or shared references.

Now that you have, in a matter of minutes, generated copious articles and done your initial relevance scan, it is time to dig into the articles to validate their relevance to your question. This is another major challenge with traditional research, as the relevant elements of an article are often buried deep in the work. You may be looking for an article's results, but results alone need context, and sifting through the information can be tedious, leading directly back to the research fatigue mentioned earlier. To search the articles more quickly, AI can be incredibly useful, and a helpful tool for this step is NotebookLM.

NotebookLM is designed to look only at information that you provide, meaning any question you ask will be answered from a review of the articles you have loaded into it. The significant benefit of this approach is that it reduces (though does not eliminate) hallucinations. Once the articles are loaded, NotebookLM generates an overall summary of all sources. You can also ask questions, and this is where the AI excels: ask a specific question about the material, and NotebookLM will produce an answer with a citation to the exact place in the articles where it sourced the information. This citation is very important, as it lets you verify that the answer accurately reflects what is documented in the article. One pitfall is that NotebookLM may draw on a source article's literature review and provide an answer that seems contrary to the study's own findings; a prime example of why outputs must be validated. NotebookLM can also create mind maps of the material, which is incredibly handy for coding in qualitative research. Other functions include relatively new enhancements that allow it to source material not submitted by the user, as well as the ability to generate slide decks and podcasts.

So, you have successfully sourced material, filtered it for relevance, and identified specific articles germane to your research question. This is where expansion and synthesis of the material come into focus. Research is no good if it is done only to prove a point; that is opinion generation. With that in mind, seeking data that dissents from the hypothesis is essential. A tool for this is the Logically App. Logically lets you load articles, which you have already sourced and vetted, and ask questions of the material, just like NotebookLM. With Logically, you can restrict the source material for answers to the loaded documents alone, to the documents plus Semantic Scholar, or to the documents plus a general Google search. Like NotebookLM, Logically provides sources and links to them. The benefit of Logically is that you can ask questions that go beyond what is contained in the source material, including questions that run contrary to the existing data. While it should not need repeating, the user must always validate the information received.

A challenge with researching anything is the time it takes to develop a question into a narrowly focused and concise basis for inquiry. Using these AI tools has saved me time and effort in information gathering. An often overlooked benefit of using the tools this way is that you end up reading more articles, and to a greater depth, than you normally would. This happens through the validation process and through exposure to parts of articles that might otherwise have been ignored. Another benefit is that the user becomes familiar with how others use AI, and how it can be abused. The three tools discussed here have many more features than are covered here, and their updates and capabilities continue to evolve in ways that will likely only make them more robust.

These are certainly not the only tools, and possibly not the best; they are, however, ones that I have used, continue to use, and continue to appreciate. Limitations and issues do manifest with the tools, which is why validation is paramount, as is the understanding that AI is not the thought producer; it is an information organizer and distributor. Additionally, the three AI platforms discussed here have free and paid versions, which will dictate some of their capabilities. Finally, everything you just read was written by a human; the following link, though, is an AI-generated podcast of this article, just for fun.

Stop Research Fatigue With Litmaps and NotebookLM Podcast

For more information, please contact Steve Phelps by email or on Teams.