Dear decision-makers, your employees care

Some of my readings from the past few weeks

These past two weeks, while others have been working hard on their writing in connection with Academic Writing Month (#AcWriMo), I have been catching up on my reading. A recurrent topic in the books and articles I have read so far is the way IT systems have been used, over the last 15 to 20 years, to carry out profound changes in the way work is distributed, structured, monitored and evaluated. Ironically, those changes have led to a significant loss in efficiency, while work-related physical and psychological symptoms of ill-being, both in the workplace and at home, seem to be becoming ever more common.

One of my colleagues, with whom I found myself sharing my surprise at these so openly counterproductive developments, explained to me that the changes were largely due to a certain view of human workers, in which money is seen as the single motivation behind employees’ carrying out their tasks as required. In other words, it is assumed that absolute control is needed to ensure that workers do their work properly, since they will otherwise do everything they can to get their pay without fulfilling their part of the contract. This was an eye-opening statement for me, as I realized that some of my first efforts to improve a work-related computerized tool had failed for precisely this reason. Let me recap.

A while ago, I was offered the opportunity to evaluate the usability of a system used to manage different types of documentary resources. Since the interface of the system was about to be changed, my focus was less on the specific characteristics of the current interface than on the structure and contents of the main work processes supported by the system. After observations and interviews with different staff members and a thorough analysis of the results, I arrived at the following suggestion for improvement: reduce the number of open fields displayed in the form used to create a new record in the system by showing only the fields most frequently used for each type of resource. This way, employees would not only get an immediate overview of the required information – making the use of the interface more intuitive – but would also be able to fill in the form without unnecessary scrolling and clicking, enabling them to work faster. To me, this seemed a very simple and cheap solution to implement, especially given its very high potential impact on efficiency. However, the proposal was met with a direct veto by one of the main decision-makers behind the system’s design. His argument: the employees would no longer bother to create high-quality, “full” records if the possibility of not filling in all the fields was somehow presented as more acceptable to them.
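
To make the suggestion more concrete, here is a minimal sketch of the idea – with entirely hypothetical resource types, field names and usage figures, not those of the actual system – showing how a record-creation form could display by default only the fields most frequently filled in for a given resource type, while keeping every other field reachable on demand:

    // Hypothetical field definitions with per-resource-type usage frequencies
    // (the fractions are made up for illustration; a real system would derive
    // them from existing records or usage logs).
    type ResourceType = "book" | "journal" | "map";

    interface FieldSpec {
      name: string;
      usage: Record<ResourceType, number>; // share of records where the field is filled in
    }

    const FIELDS: FieldSpec[] = [
      { name: "title",     usage: { book: 1.0,  journal: 1.0,  map: 1.0 } },
      { name: "author",    usage: { book: 0.95, journal: 0.2,  map: 0.3 } },
      { name: "issn",      usage: { book: 0.01, journal: 0.9,  map: 0.0 } },
      { name: "scale",     usage: { book: 0.0,  journal: 0.0,  map: 0.85 } },
      { name: "footnotes", usage: { book: 0.05, journal: 0.05, map: 0.02 } },
    ];

    // Show by default only the fields filled in for at least `threshold` of the
    // existing records of that type; everything else stays available behind a
    // "show all fields" control, so the full record structure is untouched.
    function defaultFields(type: ResourceType, threshold = 0.5): string[] {
      return FIELDS.filter(f => f.usage[type] >= threshold).map(f => f.name);
    }

    console.log(defaultFields("journal")); // -> [ "title", "issn" ]

The point of this sketch is simply that only the default view changes: nothing prevents an employee from filling in a complete record, it just takes less scrolling and clicking to enter the information that is needed most of the time.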

I was stunned. I argued that employees could not fill in all *theoretically* required fields anyway, since doing so demanded time they just were not given. Given their limited resources, they had to prioritize the information they entered about each document. Most importantly, all employees had shown incredible dedication in creating records as complete and consistent as possible, in spite of the hurdles the system put in their way when doing so. It therefore seemed very unlikely that a simplification of the form’s display would lead to a loss in quality – the contrary was more probable, since employees would be quicker at filling in “obvious” information and would thus have more time to pay attention to more unusual, document-specific characteristics. However, in spite of my attempts at convincing my audience, I left the room knowing that my report and the suggestions it contained to improve the system would end up in the paper bin, unread.

At the time, I was not able to put into words what had happened; I could not understand the perspective of that key decision-maker who had flatly refused to listen to my arguments in favor of improving a badly designed, and very inefficient, system. In light of my colleague’s words, however, it all started making sense. But let me say this: if people did not want to do a good job, and were not intrinsically motivated to do so, they would not care about bad IT systems. They would not feel bad about the lack of efficiency those bad systems generate – as they are getting paid a fixed amount of money each month, why should they care about “producing” as much as possible? Indeed, if money actually were people’s only drive, employees would be happy doing just average work, and quantity would not be of any consequence to them. They would go home satisfied with their workday no matter how many usability-related issues they had encountered, no matter how much time they had spent struggling with their digital tools. There would be no frustration, no (di-)stress, no burnout. People would probably get bored, but they would find workarounds for that, too. All would be well.

As all is not well, however, a sensible conclusion is that people care. They want to do a good job, and take pride in doing it as best they can. They get frustrated when the digital tools they use get in their way and prevent them from reaching their goals. Often, those negative feelings follow them home after work. Increasingly, those negative feelings affect their psychological and physical health.

I wish decision-makers would recognize that, and take steps to support and empower their workforce – for example, by providing them with flexible IT systems that leave them the freedom to determine how best to carry out their work tasks.

Using social media as a [newbie] academic – Part 3: the dilemma(s) of integrity

I have always been interested in the ethical dilemmas brought about by social media. (Actually, writing this post reminded me of a blog I wrote during my Bachelor years – in German – about the benefits and risks of self-marketing in social networks.) However, before I started blogging and actively using Twitter and LinkedIn myself, my perspective on those integrity-related dilemmas was that of an external observer (“lurker”, I think, is the term…). Now, I have been able to experience (at least some of) them first-hand. Keeping a blog and sharing content on social media, especially within a work-related context, does raise important and often complex questions. In this post, I will share some of the concerns and unanswered questions I have regarding my use of social media as part of my professional role.

  • What content can I write about and/or share?
    From a professional perspective, this question encompasses several different topics. First of all, should I restrict my writing and sharing practices to content related to my research field and areas of professional activity (teaching, project management etc.), or can I also write about and share articles that fall outside this scope? Another, related question is the matter of opinion sharing. Am I free to express or hint at the opinions and political stances I hold as a private person on social media accounts used in a work-related context? If so, are there boundaries I should be careful not to cross, particularly value-laden and controversial topics I should refrain from addressing when speaking as my “academic persona”? At a more practical level, do I need to add disclaimers to my social media accounts, as some recommend doing [1] [2], in order to dissociate my employer from the content I write and share?
  • How often should I share? How selective should I be in my sharing?
    This is mainly a question of social media etiquette, but it is of course important to consider, since both the content one shares on social media and the frequency with which one does so will affect others’ perception of who one is and what one stands for. Should I share whatever I find interesting – regardless of topic, time of day or previously shared content – or should I work on developing a strategy, defining rules about what, when and from whom to share? The question of authenticity comes into play here, as calculation and authenticity often sit at opposite ends of the spectrum. Is it all right to leave room for genuineness and spontaneity, or is that taking too big a risk, considering the persistence [3] of social media content and the potential negative impact a thoughtless click could have on my professional future?
  • Should I accept students into my social network?
    This is an issue I have encountered on LinkedIn and for which I have not yet found a definitive solution I am fully comfortable with. What should I do when students send me a contact request? On the one hand, they are, in some way, indubitably part of my professional environment. On the other hand, they are my students, which makes being – even virtually – connected to them outside of a teaching context seem strange and rather inappropriate to me. Am I living “in the past”, attached to considerations which have progressively become groundless in the hyper-connected world we now live in, or is it (still) wise and reasonable to establish clear boundaries between peers and students?

What is your take on those questions and issues? Are there concerns I have overlooked? What strategies do you use to preserve your integrity on social media?

References:
[1] Carrigan, M. (2016). Social Media for Academics. London: Sage Publications Ltd.
[2] Hank, C. (2012). Blogging your academic self: the what, the why and the how long? In D. Rasmussen Neal (Ed.), Social Media for Academics: a practical guide. Oxford: Chandos Publishing.
[3] Boyd, D. (2014). It’s complicated: The social lives of networked teens. New Haven: Yale University Press.

Lesson learned from a crash course in Project Management: the crucial role of communication

Poster showcasing my takeaway message from a crash course in Project Management: Communication at the heart of Project Management

As I mentioned in an earlier post, I took a crash course in Project Management earlier in the fall. Although it was a very short course, spanning only three weeks, I found it very helpful. What I most appreciated about the set-up of the class was that one of the two mandatory assignments was to interview an experienced project manager at Uppsala University. (My team partner and I had the chance to talk to Titti Ekegren, a project coordinator within the strategic innovation program EIT Health, involved in setting up courses to help students bring their research out into industry.) The combination of compact, to-the-point lectures with this hour-long exchange with an experienced practitioner was, from my perspective, a nice way to “connect the dots”. Now, what dots are we talking about? Let me tell you.

A few weeks ago, with the final session of the course approaching, I found myself trying to formulate the key messages I would be taking away from it. On one side, I had my notes from the few lectures we had had, and on the other, my notes from the interview we had conducted with Titti. I was expected to create a poster showcasing my lessons learned from the course, but found myself strangely at a loss for ideas – despite the strong feeling that there was something really important – or at least, something I felt was really important for me – that I had learned from the class. I started writing down the central project management concepts that recurred in my notes – stakeholders, value, goals, plan(ning), assessment. Then, I tried to connect those different concepts in a way that made sense to me, drawing arrows going in different directions between them. This made me realize that I had forgotten to include the project manager in my diagram, so I added her, and then the project team. After drawing a few more arrows here and there, connecting the different key project management concepts to the project manager and the project team, it finally struck me: project management is all about communication.

It is about building, maintaining and developing a common understanding of the project among all parties involved, and perhaps particularly between the project manager and the project team members. A continuous, constructive dialogue must take place between the project manager and the project team, from the definition of the project’s value and goals, through the elaboration of the project plan and the identification of the associated risks, to the assessment of its progress. From this perspective, the formal project documents and milestones recommended in project management books merely provide support for this fundamental project component, but do not have any significant value in themselves. Having created a state-of-the-art project plan without having taken the perspective of the (other) project team members into consideration is meaningless, and such an approach is doomed to failure.

If this realization of the crucial role of good communication in project management was such an epiphany for me, it is, of course, because I have been extremely bad at it so far. Suddenly, I understood the cause of the small-scale but nonetheless frustrating and hindering frictions I had experienced with some of my colleagues – not only had I not told them all there was to know about my vision for the projects, but I had not actually listened to their perspective either. I had not asked them what they wanted to get out of the project, what they felt made it valuable or how much they were able to get involved in it. As such, the projects had remained mine, instead of becoming ours.

But let’s now think a little further. If (effective) communication is at the heart of project management, it means that a good project manager needs to be, above all, a skilled communicator – one who not only shares her perspective with her team members, but also explicitly invites them to share their own vision and ideas for the project, and actively listens to what they tell her in order to (re-)shape the project components. Lesson learned.

Using social media as a [newbie] academic – Part 2: the challenge of “finding the time”

Last week, Åsa Cajander and I held a seminar on “Social Media to Promote Research and Impact Society” for some of the researchers within NORDWIT – the Nordic Centre of Excellence (NCoE) on Women in Technology-Driven Careers. The reading and thinking I did in preparation for the seminar inspired a first blog post, published last week, on the benefits I see in using social media as a researcher. In this post, I will look at the other side of the picture and discuss one of the main challenges I have encountered in relation to the use of social media, in particular blogging and Twitter: finding the time to produce and consume social media content.

I mentioned in my last post that one of my main motivations for starting a blog – and one of the main benefits of blogging according to many academics – was to get me to make a habit of writing. However, although my blog has pushed me to write more than I would have done without it, I have on multiple occasions decided to let writing the next post slide. The main reason for this was that I felt I just could not afford to dedicate time to this activity – even if skipping it meant falling short of my goal (publishing a new post at least once a week). This too-bad-but-you-have-more-important-things-to-do attitude has not only been directed toward blogging, but toward any activity on social media – be it blogging, tweeting, or reading social media content. (For instance, only a few weeks after creating my Twitter account, I disabled the app’s notifications on all my devices and refrained from opening it altogether, overwhelmed by the additional flow of information.) As a result, I have of course not been blogging as much as I had planned (though my list of topics to address on the blog is getting longer every day!) and have been unable to keep up with what the people I follow on social media have been publishing. This in turn has led me to feel not only constantly stressed out and guilty over not reaching my goals, but also generally dissatisfied with my work performance.

Talking about this problem with others, and reading an interesting book chapter about it (in Mark Carrigan’s Social Media for Academics, which I can only recommend), made me realize that I had simply overlooked the need to develop a strategy for my social media use. If social media – blogging, tweeting, as well as reading and sharing others’ content – are to be an integral part of my working life, I need to explicitly make space for them, recognizing that those activities can and should be prioritized when needed. Until now, though I recognized to some degree that using social media could be beneficial to my working life, I still did not allow myself to put it on an equal footing with my other “secondary” work-related tasks – answering e-mails, reading research-related books and articles, going to research seminars, etc. However, writing my previous post has made me want to change that, and to give blogging, tweeting and the consumption of relevant social media content dedicated time slot(s) in my schedule.

In my next post, I will address another challenge I see related to the use of social media: maintaining one’s integrity.

Using social media as a [newbie] academic – Part 1: the benefits

Tomorrow, Åsa Cajander and I will be holding a seminar on “Social Media to Promote Research and Impact Society” for some of the researchers within NORDWIT – the Nordic Centre of Excellence (NCoE) on Women in Technology-Driven Careers. I volunteered to join Åsa for this seminar not because I see myself as an expert in how to use social media, but rather because I felt it would be a good opportunity for me to dig deeper into best practices related to the topic. Preparing for the seminar led me to reflect on how I have been handling social media since I started my PhD studies – considering both what using social media has brought me so far and the difficulties and dilemmas it has confronted me with. In this post, I will focus on the former, while the latter will be the subject of a later post.

Some facts about my use of social media

For a long time, my job-related use of social media was limited to Facebook and an obsolete LinkedIn profile. Shortly after starting my PhD, however, I decided to turn my personal webpage into a blog, a development that was quickly followed by the creation of a personal Twitter account. Those steps were inspired by seeing how some of my colleagues, for example Åsa Cajander (@AsaC), Christiane Grünloh (@c_gruenloh) and Jonas Moll (@Jonas_Moll), used social media to promote their research, engage in discussions with researchers and practitioners from all around the world, and stay updated on what was happening in their respective communities.

What I enjoy about social media

  • I have been writing (despite not publishing)
    One of my main motivations for starting a blog was the conviction that it would help me become a better and faster writer by giving me a constant reason to practice writing. (In fact, this specific benefit of writing a blog seems to be one of the most frequently mentioned in the articles I have read on the topic – Pat Thomson’s post being maybe the most extensive example I can think of.) Although it is a bit too early to judge whether I have become a better writer since starting my blog – which, after all, is really not that old – and although I regretfully have to say that my writing speed has not improved so far (more on that later), I have definitely been enjoying using my blog as a writing platform. Had I not had it, I would probably have written only a tenth of what I have produced so far – simply because I have not yet come to the point where I can start publishing on my research.

  • Blogging gives me a feeling of achievement
    Now, you might be wondering what simply “writing” has brought me in itself – would it really be for the worse had I not done it? Well… yes, I certainly think so. The main benefit has been that I have been “forced” to think through the different topics I have written about. Instead of being content with vague and unchallenged ideas and arguments, I have had to reflect thoroughly on my thoughts and assumptions, structure them into a coherent whole and formulate them in a (hopefully) agreeable style. This is a time-consuming, challenging and often frustrating process, but the outcome – an “aha” experience of some kind, the feeling that you have, in some way, taken a step forward – is extremely rewarding. There is definitely, on my part, a sense of achievement that comes with each of my posts, the feeling that I have challenged myself – and the satisfaction of being able to share the result with others.

  • I have received support… and food for thought
    Of course, sharing one’s thoughts with others is at the core of social media. After all, if I just wanted to practice writing, I could do it privately – but I chose to do it publicly, on a blog everybody has access to (and which I actively “advertise” on other social media!). For me, this public sharing of my posts has had the two following main benefits:

    I have received support and advice from my community. Colleagues and fellow researchers have reacted to some of my posts (interestingly, those exchanges have mostly taken place on Twitter rather than on the blog itself), and provided me with advice and support.

    Such a nice compliment from my supervisor Åsa Cajander in response to my struggle with delegating project work
    A good piece of advice from my colleague Christiane Grünloh in response to my difficulties in documenting my daily progress, accompanied by a humorous touch.

    I have initiated discussions / debates. Some of my posts have led other people to share their own, nuanced opinions on the topic addressed (either on Twitter, on their own blog, or via private communication), pushing me to think beyond my own perspective and making me aware of aspects I had not considered.

    Jelle van Dijk, an Assistant Professor at the University of Twente, reacted to my post about why users should not be blamed for struggling with computerized systems with a post of his own, addressing aspects I had not considered.

Grappling with the pain of delegating

My answer to “How do you see yourself as a project leader?”

These past few weeks, I have been taking a short crash course in project management. Two weeks ago, after having talked about risk assessment for most of the lecture, our teacher suddenly handed us some white paper and colored pens and asked us to draw “how we saw ourselves as leaders”. Ten minutes later, as I was explaining to my neighbor the idea behind my scribble (shown above, just for the fun of it), his first reaction was “Well, you really must be terrible at delegating”. I really had not been thinking in that direction at all when producing my drawing, but as soon as he said it, I realized he had put his finger on one of my biggest issues as a project leader.

If asked, I will of course tell you that delegating is crucial in a team project, since allocating all tasks to yourself is just not sustainable. In addition, it is a waste of resources, since it is highly probable that some other team member(s) are better suited than you to carrying out certain tasks. Taking on those tasks yourself means that they will be performed less well, or at a higher cost – you might, for example, need to read up on how to proceed, or just need more time to accomplish the tasks because you are less experienced with the activities they require. Delegating is thus both a way to ensure that every team member gets a reasonable workload and a way to make the project benefit from everybody’s respective sets of skills, potentially maximizing the quality of the project outcome.

That being said, I must confess that delegating almost always throws me into emotional turmoil. Why? (1) The control freak inside me needs to ensure that things are done in a specific way and at a specific time. (Also, by a specific person: yours truly.) I cannot think of any rational argument that would give this train of thought any validity, but still. (2) I get this nagging feeling that I just need to be involved in everything in order to be worthy of my peers’ respect (#inferioritycomplex, and maybe #impostorsyndrome, though I guess you need to have quite a high opinion of yourself to think that your “I-am-not-good-enough” thoughts are due to something other than your not being good enough). In my opinion, this is already a more rationally understandable issue. After all, the academic environment is highly competitive, and there is a fine line between cleverly managing where you put your energy and just being lazy. Also, doing less means being less visible, which can quickly turn against you in that kind of competitive setting. (3) I want to learn! How can I improve and grow if I always delegate the tasks I am less good at to other people? The opposite (4) is similarly problematic: why should I delegate a task I know I can do well (maybe even best?) and will enjoy doing?

I have thus come up with three reasonable reasons (and one rather unreasonable one) not to delegate – though this unfortunately hardly brings me closer to solving my delegating dilemma. Obviously, there must be a way to balance those different “interests”, namely (2) establishing yourself in your research group / field, (3) developing your skill set and (4) (still) doing (some of) the things you like. Any thoughts? Have you encountered the same dilemmas as I have when delegating tasks? What strategies do you use to help you delegate within your project team(s)? (You do not need to write a three-page essay, but I sure could use some help…)

The struggle of finding the right words

A recurring challenge since the beginning of my PhD studies has been finding the right words to describe my research domain and interests. It seemed straightforward enough in the beginning, but quickly proved trickier than I had anticipated.

The first problem was that, as soon as I tried to operationalize what I was working with, a multitude of clarifying questions popped into my mind, giving me pause. I realized I needed to be more specific, but that I actually did not understand my research focus well enough to do so (after all, you can only explain what you truly understand, right?). Though I had thought that I had a good grasp of my research topic, this failed attempt at putting my research questions into words made me aware of a (seemingly endless) list of questions I still had to answer before going further.

(Funnily enough, I came across this episode of David McRaney’s “You Are Not So Smart” podcast a short time later. It explains how, although we may feel we understand certain things in depth – for example, how a bicycle works – we really do not. This apparently is called “the illusion of knowledge”. The experiment at the core of this finding revolved around random people being asked how familiar they were with bicycles, and then being requested to draw an actual bike. It turned out that most of the people who had described themselves as familiar with bicycles were unable to draw a functional one. I could not help drawing a parallel between this outcome and my struggle to describe my research topic.)

A second problem I was unaware of a few months ago is that in research, almost every term you can think of has a certain meaning within one field and a very different connotation within another. Of course, with a Bachelor’s in Information Science behind me, I already knew that you have to define all the key terms you use in order to avoid problems. But it seems that in science, this particular matter takes on a whole new dimension, as many terms come with their own set of implicit assumptions and values (tell me which words you use, and I’ll tell you what kind of researcher you are). Determining what those assumptions are is, as I have been told, a mandatory step if you do not want to be, at best, extremely embarrassed during your defense (I have heard some real horror stories on the subject) or, in an even worse scenario, completely misunderstood by your research community. As one of my colleagues told me some time ago: make sure you can explain the use of every word in your thesis. I’ll get right back to my books.

I wish I were better at… writing

Time to address the elephant in the room: I am not writing. It is not for lack of ideas though – Summer schools, course assignments and studies-in-progress have provided me with more than enough food for thought. But if I have been thinking, I have not been writing.

I know it is bad. I am utterly convinced that there is nothing like writing to help order scattered thoughts and develop new ideas. Plus, it is just good scientific practice – as this famous quote from Adam Savage, which one of my colleagues wrote as a dedication in the copy he gave me of his thesis, reminded me: “remember kids, the only difference between screwing around and science is writing it down”. Come to think of it, documenting one’s everyday research-related doings and musings is the piece of advice I have heard most frequently since starting my PhD studies. I have also seen how some people include extracts from their PhD diary in their thesis, and found that I really enjoy getting this insight into their learning process. But I am still not writing.

The funny thing is that, a few months ago, in one of my post-sleep-deprivation motivation highs, I did start writing my very own PhD diary – needless to say, it so far has only the one entry. Despite all my good resolutions, I have found it hard to take a moment to sit and set down on paper the outcome of my days – even when I have actually come up with thoughts I feel would be important to record. Those ideas currently end up, at best, on a sticky note on my desk, where they are forgotten until I am forced to tidy up my workspace.

The fact that I do manage to write sticky notes is interesting though – after all, writing on a post-it note or in a diary cannot make much of a difference in terms of effort. It seems that only a small step separates the messy piling up of sticky notes from the more ordered, systematic writing of entries in a PhD diary. This makes me wonder whether collecting the notes I have written during, say, a week and arranging them in my PhD diary could be a viable solution to my problem. Baby steps is the word.

Have you also struggled with documenting your daily / weekly progress? What tricks have been useful in keeping you going? Do you think such a habit is beneficial?

On humans, computers and why users should not be blamed for struggling with computerized systems

Last week, I was in Dublin for the first week of an EIT Health Summer school (the second week will take place in Stockholm later in August). The event brought together PhD students and researchers from all over the world and from a variety of disciplines, among them psychology, software engineering, human factors, human-computer interaction and nursing. Looking back at those few, eventful days, I feel that one of the most enriching aspects of this experience was, for me, the “confrontation” with viewpoints on humans and technology that were sometimes very different from my own. This led me to reflect on what exactly my beliefs and values are when it comes to humans and their use of technology. In this post, I thus want to share my view on the topic, explaining how I look at computers and humans, and what I believe this implies for designers of computerized systems such as myself.

Fundamentally, I see computers as powerful tools crafted by humans to support other humans in their tasks and, to some extent, “enhance” their abilities. Although I am aware of and respect the capabilities that have been built into computers over time, I hold humans’ cognitive abilities in very high esteem and believe that, despite our so-called “limitations”, we are “superior” to computers. It feels a bit strange to be writing such a sentence, but I had the impression last week that some fields regard humans as the “weak link in the chain” when it comes to the interplay between humans and computers, suggesting that the former are to blame when something goes wrong – a perspective I completely disagree with.

My main argument is that computers are not an independent entity existing in the world but rather, as I wrote above, a human creation: we have full power over their functioning and appearance. This means that if computers are ill suited to our needs or are not made to fit our characteristics (or “limitations”, as we are used to calling them) as human beings, we can only blame our design of such technology – but certainly not those who use it (sometimes even against their will). I find it strange that we have no difficulty recognizing this state of affairs when it comes to physical tools, but not when it comes to computers: if you were given shears to cut a piece of paper, would you blame yourself for not being able to do it well? Of course not! You would rightfully throw away the shears and request a more appropriate tool – for example, scissors. Staying with scissors, we realize that there are many different types of scissors, depending on who is meant to use them (adults, children, left-handed people, right-handed people etc.) and what they are meant to be used for (cutting paper, nails, hair etc.). After all, we cannot change the way we humans are built and function and, though our goals and activities do change and evolve to some extent, we cannot really modify them either (our basic needs do not change). What we can do, however, is adapt our existing tools and create new ones that enable us to carry out what we need with (and sometimes despite) the characteristics and abilities we possess. We seem to be able to do this quite well for physical tools, so why should we not be prepared to do the same when it comes to computers?

What I want to get at is that we designers cannot take ourselves out of the equation when assessing the interaction between humans and computers. We need to accept that if our fellow humans are not able to perform their tasks well using the computerized systems we have designed, we are the ones to blame – not them. Putting the blame on our users is a fallacy because they are not different from us, or rather, we are not different from them: we have the same limitations, and should not consider ourselves superior to, or in any way more able than, the people using our systems. Instead of wondering “How can they not get it?”, we should ask ourselves “What did I not get?”. The computerized systems we design, despite being better than we are at certain tasks, are in the end simply a reflection of our understanding of our users’ characteristics, situation and needs. The fact is that this understanding is often fragmented, incomplete or inaccurate, which is why design is such a complex, challenging and exciting art.

(Failing to) bridge the gap between research and practice: the case of HCI

I am currently in Dublin for an EIT Health Summer school (my first Summer school ever!). This morning, it was Jan Gulliksen’s turn to give a presentation, and what he said strongly resonated with me.

The topic of the presentation was “Why are you doing this?” – the “this” being, in this case, our PhD research. We actually started off by answering this question using Mentimeter. Three different options were available: saving the world, contributing to the creation of new knowledge, and getting a PhD. As it turned out, the largest group (about half of those present) chose the creation of new knowledge as their main motivation; the remaining respondents were quite evenly distributed between “saving the world” and “getting a PhD”, though the former had the advantage (I was among those – could you tell?). Jan then moved on to talking about how we have not really managed to make the most of the possibilities offered by digitalization so far, especially within higher education. His point was that, apart from the blackboard being replaced by PowerPoint presentations, the way in which we teach students is not fundamentally different from what it was in antiquity. He made a similar observation with regard to the use of IT at work, showing side by side one picture of a workspace taken about 25 years ago and one taken recently (those computer screens certainly have slimmed down!).

This fixity in the way we teach and carry out work tasks was suggested to be, at least in part, the result of our inability to translate research findings into concrete, significant change in the “real” world, and thus to foster innovation through our research. (Interestingly, the numbers suggest that this problem is particularly present in Europe, while it is much less of an issue in the US.)

If the mention of this problem resonated so much with me, it is because, as an HCI practitioner, I am reminded almost daily of how little our research is being applied in practice: I encounter the same design faults over and over again in the computerized systems I use in my everyday life. For example, I recently had to contact the customer care center of an airline because they had not sent me a receipt for a seat reservation payment (how do you not automatically send a receipt?). On my way to Dublin from Uppsala last weekend, I needed four tries to buy my train ticket to the airport (why did I have to scan my card twice throughout the process, and why did the system close down when I withdrew my card too quickly?). After buying a film on a video-on-demand website a few days ago, I was made to click my way through another section of the site in order to stream the film I had just purchased (why couldn’t it start playing right away?).

Of course, the issues I have given as examples above are not big issues – but I would argue that this serves my point even better. Those design bugs could very easily be avoided or fixed. No extensive design experience or outstanding programming skills are required – just some basic user-centered thinking. After all, the necessary knowledge (for example in the form of design heuristics) and techniques (such as quick-and-dirty usability testing methods) are readily available. They just need to be applied. So why aren’t they?

Based on my (limited) experience so far, part of the answer appears to be the limited reach of the HCI perspective: outside of a relatively small group of HCI practitioners and advocates, user-centered design principles do not seem to be drawn upon in system development and optimization processes. During my Master’s studies, I worked with computer science students who simply discarded all my suggestions because the existing solution “fulfilled the requirements” (read: offered the agreed-upon functionality, even though not in a way that was optimal from a user perspective). A few months ago, I tried to convince a head of department to make changes to the work system in use at his company in order to create a system flow that would better fit his employees’ actual workflow and work context – in vain.

Why is it so hard for HCI to have a real, large-scale impact on the systems that are being produced – both in industry and within academia (because, let’s face it, university websites and other university resources usually are good examples of what not to do)? How do you think we could change this situation?

If you are an HCI practitioner yourself, have you encountered similar difficulties in convincing decision-makers and stakeholders to implement your design-related suggestions for improvement? How have you gone about trying to “get your way” in spite of this resistance?