Interface agents and human control
We hear a lot about Radical Trust, with the emphasis being on trusting users (of systems, websites, etc.) to guide organizations. I have tried to sound a skeptical note at times, pointing out that something called “groupthink” is the danger when you decide to trust the wisdom of crowds. I’ve always most admired people whose ideas were extremely unpopular, and who may even have been cast out of their own communities, but whose ideas proved to be true or whose work turned out to be of great value later on. Only because they worked independently and recorded their ideas or art for posterity did we benefit from their thinking. Radical Trust is primarily about trusting the average (or so it seems to me).
Right now, however, I want to focus on a different but related contemporary problem of trust, and that is the trust that we put in increasingly intelligent machines to help us do what we want to do. I’m limiting this discussion mainly to the way search engines work differently from the earlier generation of database interfaces that information professionals used, which were operated using pure Boolean logic. There is a whole host of interface agents, however, that are designed to do some of our thinking for us. “Smart” is the signal, in marketing campaigns, that a new level of AI is being applied in a service, for better or worse. (I remember when I first saw “smart cards” advertised, I thought, “Just what I need – an ATM card that is smarter than me.”)
I received my education in information retrieval at a time when simple Boolean searching was still the norm. Boolean searching meant that the searcher could construct effective search expressions based on clear knowledge of what the machine was doing. Knowledge of how to use Boolean logic, combined with knowledge of what was in the database (the size of the database relative to the desired results set, the likely frequency of search terms, etc.), was what a professional needed to do skillful searches with good results.
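To make the contrast concrete, here is a minimal sketch of Boolean retrieval in Python. The documents, terms, and index are invented for illustration; the point is that the result set follows mechanically from set operations that the searcher fully controls.

    # A toy inverted index: term -> set of document IDs containing it.
    documents = {
        1: "boolean logic in database searching",
        2: "relevance ranking in modern search engines",
        3: "boolean operators and search strategy",
    }

    index = {}
    for doc_id, text in documents.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)

    def boolean_and(*terms):
        """Documents containing ALL the terms -- nothing more, nothing less."""
        sets = [index.get(t, set()) for t in terms]
        return set.intersection(*sets) if sets else set()

    def boolean_or(*terms):
        """Documents containing ANY of the terms."""
        result = set()
        for t in terms:
            result |= index.get(t, set())
        return result

    def boolean_not(included, excluded_term):
        """Drop documents containing the excluded term."""
        return included - index.get(excluded_term, set())

    # 'boolean AND search' retrieves exactly the documents with both terms: {3}
    print(boolean_and("boolean", "search"))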
A search engine algorithm, on the other hand, works in a fundamentally different way, and is designed to do some of the user’s thinking for him. Not only does it incorporate relevance ranking in its display of results, it also determines what will be in the results set according to its relevance formula. Rather than simply including or excluding items according to the presence or absence of search terms, it decides what is in or out of the result set by measuring a calculated relevance score against a numerical threshold. Not all items in the result set will necessarily contain all the search terms, and not all items containing a given search term will appear in the result set.
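As a toy illustration of that difference, consider the sketch below. The scoring formula and the threshold value are invented for the example; real engines’ formulas are proprietary and far more complex.

    documents = {
        1: "boolean logic in database searching",
        2: "relevance ranking in modern search engines",
        3: "boolean operators and search strategy",
        4: "search engine relevance thresholds",
    }

    def score(query_terms, text):
        """Fraction of query terms present in the document -- a crude
        stand-in for a real relevance formula."""
        words = set(text.split())
        hits = sum(1 for term in query_terms if term in words)
        return hits / len(query_terms)

    THRESHOLD = 0.6  # arbitrary inclusion cut-off

    query = ["relevance", "ranking", "search"]
    ranked = sorted(
        ((score(query, text), doc_id) for doc_id, text in documents.items()),
        reverse=True,
    )
    for s, doc_id in ranked:
        if s >= THRESHOLD:
            print(doc_id, round(s, 2))
    # Prints doc 2 (1.0) and doc 4 (0.67): doc 4 lacks "ranking" yet makes
    # the cut, while doc 3 contains "search" but falls below the threshold.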
In practical terms, a Boolean interface provides exact control but requires a higher degree of skill, while a search engine offers weaker control but requires less skill. There is, admittedly, skill involved in effective use of a search engine, and that skill involves the same kind of knowledge of what is in the database (i.e., being able to roughly predict what kind of search will work based on a sense of what is out there). But because search engine algorithms are proprietary, complex, and frequently changed, it is not possible to have the kind of knowledge of the system’s workings that one would need to control one’s search results nearly as tightly.
This means that we have to trust the interface (just as library patrons who wanted a database search formerly had to trust us as search intermediaries), and give up a degree of control.
The results are sometimes frustrating, and as interfaces become smarter, the frustration can increase rather than decrease. For example, I have noticed recently that Google has started to include similarly spelled words as hits in its results, beyond merely suggesting alternate spellings. This can make it more difficult to search for a person who has a name that is an alternate spelling of a common name (where in the past their odd spelling made it an easier search). Just a small example of the way that a smarter interface can make it harder to do what you want.
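One common mechanism behind this behavior is edit-distance expansion of query terms; a minimal sketch follows. The vocabulary, names, and distance cut-off are invented, and Google’s actual matching is proprietary and certainly more sophisticated than this.

    def edit_distance(a, b):
        """Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    vocabulary = ["jon", "john", "joan", "jane"]

    def expand(term, max_dist=1):
        """Vocabulary words within max_dist edits of the query term."""
        return [w for w in vocabulary if edit_distance(term, w) <= max_dist]

    # A search for the unusual spelling "jon" now also matches "john" and
    # "joan", drowning out the person whose distinctive name once made the
    # search easy: prints ['jon', 'john', 'joan']
    print(expand("jon"))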
Part of the problem is that interface agents are programmed according to the patterns of a common denominator of users, while as information professionals we tend to search differently from the average user. We expect more precision from systems, and as systems do more of our thinking for us, we are losing our ability to get that precision. Most people aren’t interested in the degree of precision that we are, or at least don’t have a clear concept of how to control an interface in order to get it, or the time or inclination to learn the necessary skills.
This problem interests me as a librarian, because I’m concerned about deprofessionalization and disintermediation in our field, but the broader issue also interests me as an observer of society. Not only are interfaces becoming smarter, but the databases with which they interface us (government, commercial, medical) are becoming integrated. This means that we are being encouraged to trust what is gradually becoming a unified interface to a decision-making network of software that makes assumptions based on averages and data of unknown quality. Interfaces that were initially transparent tools have become opaque agents in their own right, with consequences for our ability, ultimately, to have control over our own lives. It seems like sci-fi, but to an extent it is already here….
3 comments on “Interface agents and human control”
This applies directly to LIS work when it comes to these new, overlaid “Discovery Tools” that sit on top of the OPAC, the Databases, etc., and pull results from multiple resources and dump them into one results queue. It would be fine if you want “something, anything”, but generally *I* want to be more specific than that. After a certain point a researcher needs to be able to judge what they can safely *IGNORE*. Because human lives and our time are lamentably finite, what NOT to read becomes an even more important question than what to read.
I don’t mind Discovery Tools per se when THEY GIVE ME THE OPTION TO TOGGLE THEM OFF and get back to the basic OPAC view/search interface. But I’ve bumped into some Discovery Tools that DON’T let you do this, and this ticks me off…it certainly isn’t going to “save the time of the user” (per Ranganathan) in that instance. It may save the time of novices, but will WASTE the time of experts, and it really is ‘dumbing down’, no matter how Web/Lib 2.0 tries to spin it otherwise.
Open the pod bay doors, HAL…
My favorite thing about Dialog (which I did get to play with in library school just a few years ago) was how specific I could be about what I wanted and how I knew what I was getting and how it was retrieved when I searched. I don’t think relevance ranking is inherently bad, particularly if you have a fairly basic query and don’t want to get the last in, first out gunk of a regular OPAC keyword search. But I still like transparency in terms of search mechanisms, and I think you’re right to worry about such mechanisms becoming more obscure — and about the possibilities for political obfuscation that such secretive mechanisms provide.