Lessons learned from writing my first academic paper

A couple of weeks ago, I completed my very first academic paper. Although I was quite anxious prior to getting started, the process unfolded surprisingly smoothly. I must say that I was lucky to benefit from the experience of my supervisors and colleagues, who were, as always, a big help. In this post, I thought I would share with you what worked well in this process of writing my first paper. I hope that those tips can be of use to you when writing your next article.

  • Using LaTeX instead of Word.

    I have lost count of how many people I have heard tell stories about how enormously LaTeX had facilitated their writing process – “enabling them to focus on the content instead of on the form”. Although I knew I would have to spend some extra time familiarizing myself with how it worked, those stories had me convinced that it was a worthwhile effort. (Of course, it absolutely was!)

    As I knew that some of my colleagues used and seemed happy with Overleaf, a collaborative LaTeX editor, I chose it as my writing tool. It took me about four hours the first day to set everything up and sort out the error warnings I was getting. Fortunately, I could easily find the solution to each of my issues through a quick Google search. Once these initial preparations were done, LaTeX / Overleaf lived up to all its promises: I could indeed write my whole text without worrying about the formatting. I could furthermore see the latest fully formatted version of my text at all times, which was particularly helpful in my specific case, as the number of pages was strictly limited. Another element I highly valued was how easy it was to add and modify my bibliographical references – I never had to give them a second thought. Last but not least, a perfectly formatted PDF of my paper was always just one click away. This was very convenient both when it was time to submit the final version and when I wanted to get feedback on my work-in-progress during the writing process.
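    To give an idea of why the references never demanded a second thought, here is a minimal sketch of the kind of setup I mean (the file names and the citation key are invented for illustration):

```latex
% main.tex – a minimal skeleton; the bibliography entries live in a
% separate refs.bib file (e.g. an @article{smith2015, ...} entry)
\documentclass{article}
\begin{document}

Earlier work has explored this question \cite{smith2015}.

\bibliographystyle{plain}  % choose the reference style once
\bibliography{refs}        % the reference list builds itself

\end{document}
```

    Updating a reference then only means editing its entry in refs.bib; every citation and the reference list are regenerated at the next compilation.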

  • Getting feedback from several people with different backgrounds.

    In addition to my main supervisor, I was able to ask my two ‘secondary’ supervisors and a colleague to give me feedback at different stages of the writing process. Without my providing them with any kind of instruction on the kind of feedback I needed, each one of them naturally focused on a different aspect of the paper: the overall structure of the paper, the detailed structure of the different sections, the presented concept, the formulation of my contribution… At the end of the process, I felt that I had been able to work in-depth on each of these essential aspects of the paper. This would certainly not have been the case had I shown my text to only one person.

  • Starting the submission process early.

    This tip came from my main supervisor, and I was really happy to have followed it when the time came to submit my paper (which happened to be on the very day of the submission deadline). In this particular case, submitting the paper required filling in a form with different kinds of information, such as author, affiliation, title, keywords, etc. This form could be saved, and then returned to when it was time to make the final submission. I thus took the time to register and fill in most of the form about one week before the deadline. Then, when the day of the submission arrived, I only had to upload the final PDF version of my paper and press “Save”. No stress! The only drawback with this method is that one could forget to update the information in the form when submitting the final version. As such, I would recommend always quickly going through the content of the submission form before uploading any file.

What are your tips when it comes to writing and submitting papers? Can you relate to the ones presented above?

Why I believe that my teaching duties make me a better researcher

These past three weeks have been almost exclusively dedicated to my teaching duties. In parallel to taking two different courses organized by the didactics department at Uppsala University (an academic teacher training course and a class on supervising students), I have been working on setting up two new courses that will be launched in a few weeks. One of them, “Complex IT in large organizations”, is led by my supervisor Åsa Cajander and will be held for the very first time (I have written about it on the blog of my research group). The other, an online introductory course in HCI, has gone through a thorough restructuring, and this will be the new design’s first iteration (fingers crossed!).

Creating new courses (or making new courses out of old ones) is one of my favorite activities as a PhD student. However, looking around me, I realize that the people sharing this preference are rather few. Often, the reason for this is that teaching duties are experienced as being an impediment to doing research: the time dedicated to teaching is seen as time “stolen” from what actually counts – yes, research. I do not share this negative perception of teaching.

Obviously, I do not mean that teaching does not take time, because it does. Actually, in my experience, high quality teaching always requires more time than what is allocated to teaching staff. (Of course, this perception also comes from the fact that I am an inexperienced teacher, and that most of my teaching work consists of first-time events.) In any case, I have found that dedicating those extra hours to the task is what makes teaching a rewarding endeavor. As such, I do not mind spending several additional hours away from my research in order to make sure that my teaching sessions will be a meaningful experience for both my students and me.

Most importantly, I have come to see that teaching and researching are not mutually exclusive. I do not stop being a researcher when I teach, and I do not stop being a teacher when I am working on my research. On the contrary, I can use the one to “feed” the other – and vice versa. For example, as I was looking for additional course literature for my students in the HCI online course, I stumbled upon a resource that was extremely relevant to my research topic. Likewise, as I was pondering the implications of an article in the context of my research, I became inspired to review the design of the course (then still in progress) in order to incorporate the new perspective I had just learnt about. This enabled me to better understand what the approach was about, in addition to improving on my original course idea.

In the classroom, I also like spreading the word about my research – I openly confess to attempting to convert my students to user-centered values – and using it to support students in their learning process. Would I enjoy researching without teaching duties? I think I would. But there is so much I would miss, overlook or simply not reflect on if I didn’t have to teach that I am actually grateful I have to.

What is your experience of teaching within academia?

Celebrating one year as a PhD student!

Today, it has been exactly one year since I started my PhD studies. Unsurprisingly, the past 12 months have gone by in a blur. Looking at my list of publications, however, one might wonder what I have actually been doing during that time – as (spoiler alert) it has not been writing articles.

Well, thinking about it, this year has been mostly about learning. I have slowly been getting a better grasp on (to me, at least) fuzzy concepts such as research and academia. Now, I understand a bit more – at least enough to be aware of everything I still do not know and have not yet understood. (Yes, lifelong learning will be needed indeed. Not that I doubted it before, but it has somehow become much more concrete throughout the past months.) In this post, I would like to share with you what I feel I have learned about being a researcher throughout this first year.

  • Research is a way of thinking.

I started my thesis with a very vague idea of what “true” research was. I heard everybody talk about it, refer to it, discuss its “validity”, but I really did not have a clue as to what it was about. What made a question interesting from a research / scientific perspective? What differentiated a research project from, say, an organizational development project? This first year has provided me with the beginning of an answer. Listening to my colleagues, reviewing papers and doctoral theses together with them, reading scientific articles and working on the development of my own research project, I have come to understand a little bit better how researchers think. This has led me to change my “mental model” of research, from a kind of technical, project-based method to a way of thinking, of questioning things, of looking at the world.

  • Research is a continuous process. With a lot of unforeseen turns.

At the beginning of my thesis, I used to see the research process as rather straightforward: development of a research question, elaboration of a suitable methodological approach, data collection, analysis of the results etc. I did not understand how you could “get lost” along the way, ending up with a method and results that did not answer the original research question (a phenomenon that I observed in several scientific papers). Couldn’t you start with a well thought-out research plan and just stick to it? But I have come to realize that as you slowly discover more and more about your research topic, your perspective changes. What first seemed to be a fantastic idea might seem obsolete and terribly limited a few weeks later; you might find an article disproving certain aspects of your method, or discover a different theory or method that seems more promising, among other things. And then there are, of course, practical limitations and obstacles from the field that can get in the way (not enough participants, original set-up not implementable, etc.), leading to your not performing your study as you had planned. Things just do not always go as planned – not least your own understanding of your topic.

  • Research is not only about “making”. It is also about sharing and listening.

There is a very social aspect to research, which I did not suspect before I started my PhD studies. Exchanges with peers and more experienced researchers are a real driver for reflecting on and “feeding” one’s research. Beyond attending conferences, of which I do not yet have much experience, taking part in informal discussions with colleagues, attending seminars and defenses, and even more actively asking my peers for advice have been huge contributors to the main “breakthroughs” in the development of my thesis so far.

  • Part of being an academic is juggling tasks. Constantly.

You have already heard me complain about my struggle to manage my time – the fact is, working within academia implies working on many different kinds of projects at the same time, continuously setting new priorities, and constantly switching from one project to the next. The picture of the isolated, desk-bound scientist who spends the whole day thinking about their research could not be further from the truth – within my research area, at least. Many of my daily activities are social and not directly related to my research (though, fortunately, the discovery of unexpected “bridges” leading back to one’s research is frequent).

Preparing and moderating workshops: a few (hard-learned) tips

For my first field study, I have been conducting workshops at the Uppsala University Hospital with nurses and assistant nurses. My experience with workshops prior to starting my PhD studies was very limited, and so this has mostly been a “learning by doing” kind of process for me, where I have tried to get a little bit better every time. In this post, I would like to share with you a few tips that I have personally found very effective in helping me in my role as workshop moderator.

  • Writing a script: once I have defined the concept of the workshop, I write a detailed script covering each of its “moments”. For example, I write down how I will greet participants and introduce the study to them, and I list the different questions I am going to ask them. This forces me to really think through my workshop design, enables me to identify weaknesses in the set-up and helps me formulate better prompts as well as foresee follow-up questions (and how to answer them in the best possible way). As it requires me to visualize the whole event from beginning to end, writing a script also enables me to feel more relaxed during the real-life workshop – just because it makes me feel prepared and in control of the situation.
  • Rehearsing: as with an interview, where you learn your interview schedule by heart in order to allow for a more fluid discussion, rehearsing before a workshop makes it easier to moderate the discussion by diminishing the need to fully improvise. If you are well prepared and have experimented with how to respond to different questions and situations that could arise, it is easier to adapt to unexpected events, because you have a bigger “pool” of prepared reactions that you know are appropriate. Rehearsing is also another great way to counter initial nervousness when getting started in a new context.
  • Making a checklist: a workshop typically requires a diversity of material: you need recording devices (always at least two, to be on the safe side), pens and paper for the participants, information sheets and consent forms, possibly name tags, a notepad, a clock etc. In my experience, this diversity makes it easy to forget something. Making a checklist of everything I need to bring with me to the workshop – both for the participants and for myself as moderator – has been very effective in preventing me from leaving important items in my office, or from omitting to get them out of my bag when preparing for the workshop on site (I always forget the notepad!).
  • Using cue cards: reading a long text during a presentation is not a good idea, and neither is reading from a script during a workshop. However, cue cards work great for me. I generally write one (one-sided) cue card for every workshop moment. Something that I can recommend is to include the exact timestamp at which you expect each moment to start and end (instead of having, for example, the expected duration of each moment in minutes). This makes it easier to check whether you are on track during the workshop, without requiring you to switch focus from the conversation.
  • Learning participants’ names: I find it extremely hard to moderate a discussion if I do not know the participants’ names. It often happens that you want to ask a follow-up question to a participant, or invite a participant to speak. Another frequent need is to refer to what a person has said when summarizing what you have understood from a discussed topic, in which case knowing that person’s name is a must (I personally find using “you” and pointing very awkward when I have to do it). I also think that using your participants’ names creates a more sympathetic atmosphere. However, using names also means that the recording of the discussion is not anonymous, which can be a problem in some situations.

What do you think of those tricks? Do you have other tips when it comes to preparing and conducting workshops?

The transcription rollercoaster (1/2)

I am currently in the midst of an intense data collection phase and have been conducting 90-minute focus groups with nurses and assistant nurses (5-6 at a time) from different departments and with different specializations (i.e. ward and surgery nurses) at the Uppsala University Hospital. (I am still pinching myself about our having been able to gather such large groups of participants, since having nursing staff dedicate 90 minutes of their time to a study is very difficult.) Designing, planning and performing the focus groups has been a very exciting and positive experience so far. Right now, however, I am experiencing perhaps the least fun side of data collection: transcription.

I had heard that transcribing was a very time-consuming activity, of course, and had made approximate calculations of the time I would need to transcribe each focus group. Unfortunately, my calculations happened to be way, way off. (In case you are wondering: according to my latest estimations, I have needed about one hour per 5 minutes of recording – at that rate, a single 90-minute focus group takes some 18 hours to transcribe!) The amount of things one has time to say in only a few seconds is rather amazing… It is actually quite funny (in retrospect; I did not find it funny while doing it, as you can imagine) – it feels like you have been transcribing forever, and then you look up and see that you have actually only gotten 2 minutes further since the last time you checked!

But transcription is not only time-consuming, it is also much more difficult than I expected. (Of course, I have been transcribing in Swedish, which I have only been speaking somewhat fluently for the past year and a half or so, but I do believe that some of the difficulties I have encountered are inherent to the task and not entirely dependent on the transcriber’s degree of familiarity with the language spoken.)

Here are some of the difficulties I have been experiencing:

  • Not understanding what is being said: it can be because several participants talk at the same time, talk too fast or simply mumble – in any case, the result is that no matter how many times I listen to a segment and how much I slow down the pace of the recording, I simply do not grasp what is being said. As a result, “data retention” is definitely not 100% (as one could perhaps have assumed, given the use of a recording device). Part of the data does get “lost” in the transcription process.
  • Not recognizing who is speaking: this is a problem I had not at all anticipated, but it definitely is a big one. Voices sound quite different on the recording (it gets even worse if you slow down the pace of the recording when transcribing), and when you have been talking to 5-6 people you had never met (or heard!) before, recognizing who is speaking when is simply impossible. I used volume as an indication – I knew that those who I heard most loudly were those sitting closest to me and the recording device – but that is of course not fully reliable. Fortunately, accurately recognizing who was speaking was not really needed for the analysis of the data.
  • Finding the appropriate level of detail: it is up to the transcriber to determine how faithful and detailed the transcript should be. If you are doing, say, a discourse analysis, I guess you need to have every word in precisely the order in which it was spoken (I was told that some researchers even count the number of seconds of pauses in the conversation). Luckily, this was not the case for me. Although I wrote everything in detail at first – every hesitation, every false start, every nodding sound – I then realized that this level of detail was unnecessary for the kind of analysis I was planning to undertake. I thus started to focus much more on the content of what was being said, skipping hesitations and unfinished sentences (although we do not always notice it when listening to somebody speak, oral discourse is very fragmented and contains many aborted sentences) as well as filler words (like, for example, “like” or “ah”). This made for a transcript that was much easier to read and better suited to my needs.

Do you recognize those difficulties? Have you experienced other difficulties that I have not mentioned here? Do you have any tips and tricks for transcribing from audio recordings?

Discovering Dewey

I am currently taking a reading course, led by Professor Kia Höök at KTH (the Royal Institute of Technology in Stockholm), about the concept of experience. The first work on our list was John Dewey’s Art as Experience. I had never heard of this prominent American philosopher (born in 1859, died in 1952 at 92!) nor read anything even closely related to esthetic theory before, so as you can imagine, this made for quite a challenging read! Dewey’s style is, to say the least, far from straightforward, and the concepts he attempts to define, describe and analyze in this book are all abstract and complex processes – each sentence requiring careful examination and reflection. Attempting to interpret and make sense of the book’s contents with others through seminars was very helpful, and I must confess I am not sure I would have managed to read the whole piece had I not had this support.

However, in spite of those difficulties, I very much enjoyed learning about Dewey’s philosophy, at the core of which lies the concept of experience. Dewey sees (what he calls esthetic) experience as an interaction between an individual and her environment – an interaction characterized by the full engagement of the individual. Throughout such an experience, the individual’s past knowledge and history are combined with the material being interacted with, which eventually leads to the individual’s “transformation” and growth. For Dewey, having esthetic experiences (in one’s daily life, not least at work) is a vital human need and aspiration. For instance, in Art as Experience, he criticizes the way the standardization and extreme simplification of chain factory work prevents workers from having such “esthetic” experiences – creating an unhealthy working environment.

As my research focuses on the impact of technology on nurses’ work environment, Dewey’s mention of this particular development from the industrial revolution struck a chord. It made me want to look at my research topic through the lens of experience, from the perspective of which several interesting questions arose. For instance, what constitutes what one could call “the nursing experience”? In other words, what kind(s) of experiences are specific to the nursing activity – and how do nurses’ current digital tools support or hinder the occurrence of such experiences? As having an experience has something to do with making sense of an activity, another way to formulate the question could be, what makes nurses’ work, from their own perspective, meaningful? From there, the question becomes whether the use of computerized systems is connected to the feeling that the way these systems are used makes sense from a nursing perspective. Do nurses’ digital tools enable them to give meaning to their daily work and, if not (or not fully), how could we (re)design those tools so as to make their use more meaningful for the nurses using them, and support nurses in having (more) esthetic experiences of their work / at work?

Of course, Dewey’s Art as Experience has not only made me reflect on my research, but also on my own (work) life. It is for example impossible not to search for examples of esthetic experiences in one’s more or less recent past, and to try to understand what made such experiences possible, while reading the book. To conclude, I can say that despite it being a difficult read, it is definitely a rewarding one – even though one read is probably not enough to get a good grasp on everything Dewey is saying.

Lessons learned from a pilot focus group with colleagues

A few months ago, before conducting focus groups with nurses and assistant nurses at the Uppsala University Hospital for my first field study within my PhD – investigating the effects of IT use for patient care management on nurses and their work – my co-author and I set up a pilot study with a few of our colleagues. My goals with this endeavor were to:

  • Check whether we would be getting the kind of data we needed from the discussion (did our questions trigger the kind of answers we were looking for?);
  • Identify “glitches” in the design of the discussion;
  • Get feedback on the study design in order to be able to improve on it;
  • Get to practice my new role as the discussion moderator, which I hoped would lead me to feel more at ease and more “fluent” during the real event.

Of course, we needed to adjust the topic of the discussion to the specific context of the pilot in order for it to be meaningful for our participants. We also shortened the duration of the activity in comparison with the “real” study design (from 90 to 60 minutes), and made sure to include some time at the end to discuss the study design itself. On the day, 3 of our colleagues thus gathered for an hour to discuss recent experiences with some of the IT systems they used at and for work, as well as to reflect on the consequences those experiences had for them and their work.

Now that we have performed 2 focus groups in a “real” setting, here are the key lessons – in terms of what it enabled us to do / see and where it was misleading – I am taking with me about conducting such a pilot focus group with colleagues:

  1. It enabled us to ensure the validity of the study – to make sure that we actually were getting the kind of data we were looking for.
    Although the topic of the discussion was not the same as in the “real” study, we were able to see whether the questions I was asking the participants to discuss led them to mention the kind of aspects we were interested in.
  2. It made it possible for us to identify some unanticipated issues in the flow of the discussion.
    For example, I realized that one particular idea we had had in order to get the discussion rolling was not working well. We were able to come up with something better for the real study – in part also thanks to the feedback we got from my colleagues at the end of the discussion.
  3. It brought to light some divergences in perceptions between my co-author and I.
    My co-author being in charge of taking notes during the discussion (the notes being visible to the participants in real time), I realized, for instance, that I had expected a different kind of notes from the discussion than what she was writing. Becoming aware of these differences in expectations enabled us to discuss them openly – and to come to an agreement on how things should be done during the real event.
  4. We received very valuable feedback from our participants / colleagues.
    Having “experienced” our study, our colleagues were able to use their own experience with data collection in order to reflect on it critically and give us constructive feedback as well as suggestions on how we could improve on our design. This for example led us to move from a paper-based note-taking technique to the adoption of a digital note-taking tool for the “real” focus groups.
  5. Academics are not (necessarily) representative of the population targeted by the study – so work on those follow-up questions.
    Although the conclusions we drew from the pilot about the validity and design of our study were mostly supported by our subsequent experiences in the real study setting, something I had overlooked is that academics (and maybe especially colleagues) should not be considered representative of the population targeted by the study. Academics are used to analyzing and reflecting on their practice, as well as to putting those reflections into words – which might have led to their finding it easier to express their views on the topic at hand. This is not necessarily the case for other populations: getting academics to talk about a topic does not mean that your study participants will be as talkative. Seeing how easily my colleagues had become engaged in the discussion, I expected the discussion to go as smoothly in the real setting, and did not spend enough time thinking of possible follow-up questions to re-launch the discussion – which was a mistake.

5 reasons why PhD students should attend doctoral defenses

Since I started my PhD at the beginning of the year, I have attended four doctoral defenses and one so-called “mid-term seminar” – a presentation of a PhD project halfway through one’s PhD studies. Talking with others, I was surprised to hear that not all PhD students attend doctoral candidates’ defenses. In this post, I would like to name 5 reasons why I believe that attending PhD defenses is a great learning experience.

  1. Getting a feeling for “how it’s done”. Attending defenses demystifies the process: you get to see how a defense is structured, who the main actors are, what kind of topics are addressed, what the atmosphere is like etc. Taking in those different aspects will probably enable you to prepare better for your own defense, but also to feel less stressed out when your turn comes, because you will not be jumping into the unknown.
  2. Learning about the state-of-the-art. If you are attending a defense that is relevant to your research area, chances are high that you will get an in-depth insight into the latest and / or most significant developments within your field. Not only does the defending PhD candidate present a piece of research that constitutes in itself a new development, but the discussion with the opponent and the evaluation panel also brings to light relevant previous findings, as well as their implications for the field.
  3. Learning about new methods, and how to apply them. Hearing about how others have gone about answering their research questions is always inspiring, even if the topic of the presented thesis is very different from yours. In addition to getting ideas for what methods you could use in your own upcoming studies, you also get critical information about the benefits and drawbacks of the presented methods through the discussion between the candidate and the evaluation panel (methodological questions are always part of the discussion, although the extent can, of course, vary).
  4. Establishing a list of “hot questions”. Although it is impossible to predict exactly what kinds of questions you will be asked during your defense, attending multiple defenses enables you to identify recurring topics. For example, I have already mentioned methodology-related questions above; questions about your contribution to your research field, and about the degree to which your research fits into that particular field’s research “tradition”, are, in my experience, also very common. Knowing those “hot questions” will certainly enable you to prepare your defense better when the time comes, but also gives you a sense of what you need to learn and understand before you get there.
  5. Learning how to make great presentations. Watching good presentations is, in my experience, an easy and great way to get ideas on how you can improve on your own presentations. PhD candidates and / or opponents usually put great care into designing their respective presentation for the defense, which usually leads to those presentations being well-crafted examples of good presentations.

Do you agree with me? Can you think of any additional reasons for PhD students to attend doctoral defenses during their PhD studies?

Dear decision-makers, your employees care

Some of my readings from the past few weeks

These past two weeks, while others have been working hard on their writing in connection with the Academic Writing Month (#AcWriMo), I have been catching up on my reading. A recurrent topic in the books and articles I have read so far is the way IT systems have been used, within the last 15 to 20 years, to carry out profound changes in the way work is distributed, structured, monitored and evaluated. Ironically, those changes have led to a significant loss in efficiency, while work-related physical and psychological symptoms of ill-being, both in the workplace and at home, seem to become ever more common.

One of my colleagues, with whom I found myself sharing my surprise at those so openly counterproductive developments, explained to me that these changes were largely due to a certain view of human workers, in which money is seen as the single motivation behind employees’ carrying out their tasks as required. In other words, it is assumed that absolute control is needed in order to ensure that workers do their work properly, since they otherwise will do everything they can to get their pay without fulfilling their part of the contract. This really was an eye-opening statement for me, as I realized that some of my first efforts at improving a work-related computerized tool had failed for precisely this reason. Let me recap.

A while ago, I was offered the opportunity to evaluate the usability of a system used to manage different types of documentary resources. Since the interface of the system was about to be changed, my focus was less on the specific characteristics of the current interface than on the structure and contents of the main work processes the system supported. After observations and interviews with different staff members, and a thorough analysis of the results, I arrived at the following suggestion for improvement: reducing the number of open fields in the form used to create a new record in the system, by displaying only the fields most frequently used for each type of resource. This way, employees would not only get an immediate overview of the required information – making the interface more intuitive to use – but would also be able to fill in the form without unnecessary scrolling and clicking, enabling them to work faster. To me, this seemed a very simple and cheap solution to implement, especially considering its very high potential impact on efficiency. However, the proposal was met with a direct veto from one of the main decision-makers behind the system’s design. His argument: employees would no longer bother to create high-quality, “full” records if not filling in all the fields was somehow presented as more acceptable to them.

I was stunned. I argued that employees did not have the time to fill in all *theoretically* required fields anyway, since doing so demanded an amount of time they simply were not given. Given their limited resources, they had to prioritize the information they entered about each document. Most importantly, all employees had shown incredible dedication in creating records as complete and consistent as possible, in spite of the hurdles the system put in their way. It therefore seemed very unlikely that simplifying the form’s display would lead to a loss in quality – the opposite was more probable, since employees would be quicker at filling in “obvious” information and would thus have more time to pay attention to unusual, document-specific characteristics. However, in spite of my attempts at convincing my audience, I left the room knowing that my report and the suggestions it contained for improving the system would end up in the paper bin, unread.

At the time, I was unable to put into words what had happened; I could not understand the perspective of that key decision-maker who had flatly refused to listen to my arguments in favor of improving a badly designed, and very inefficient, system. In light of my colleague’s words, however, it all started making sense. But let me say this: if people did not want to do a good job, and were not intrinsically motivated to do so, they would not care about bad IT systems. They would not feel bad about the inefficiency those bad systems generate – as they are getting paid a fixed amount of money each month, why should they care about “producing” as much as possible? Indeed, if money actually were people’s only drive, employees would be happy doing just average work, and quantity would not be of any consequence to them. They would go home satisfied with their workday no matter how many usability-related issues they had encountered, no matter how much time they had spent struggling with their digital tools. There would be no frustration, no (di-)stress, no burnout. People would probably get bored, but they would find workarounds for that, too. All would be well.

As all is not well, however, a sensible conclusion is that people do care. They want to do a good job, and take pride in doing it as well as they can. They get frustrated when the digital tools they use get in their way and prevent them from reaching their goals. Often, those negative feelings follow them home after work. Increasingly, those negative feelings affect their psychological and physical health.

I wish decision-makers would recognize that, and take steps to support and empower their workforce – for example by providing them with flexible IT systems that leave them the freedom to determine how best to carry out their work tasks.

Using social media as a [newbie] academic – Part 3: the dilemma(s) of integrity

I have always been interested in the ethical dilemmas brought about by social media. (Actually, writing this post reminded me of a blog I wrote during my Bachelor years – in German – about the benefits and risks of self-marketing in social networks.) However, before I started blogging and actively using Twitter and LinkedIn myself, my perspective on those integrity-related dilemmas was that of an external observer (“lurker”, I think, is the term…). Now, I have been able to experience (at least some of) them first-hand. Keeping a blog and sharing content on social media, especially within a work-related context, does raise important and often complex questions. In this post, I will share some of the concerns and unanswered questions I have regarding my use of social media as part of my professional role.

  • What content can I write about and / or share?
    From a professional perspective, this question encompasses several different topics. First of all, should I restrict my writing / sharing practices to content related to my research field and areas of professional activity (teaching, project management etc.), or can I also write about and share articles that fall outside this scope? Another, related question is the matter of opinion sharing. Am I free to express, or hint at, the opinions and political stances I hold as a private person on social media accounts used in a work-related context? If so, are there boundaries I should be careful not to cross – in particular, value-loaded and controversial topics I should refrain from addressing when speaking as my “academic persona”? At a more practical level, do I need to add disclaimers to my social media accounts, as some recommend doing [1] [2], in order to dissociate my employer from the content I write and share?
  • How often should I share? How selective should I be in my sharing?
    This is mainly a question of social media etiquette, but it is important to consider, since both the content one shares on social media and the frequency with which one shares it affect others’ perception of who we are and what we stand for. Should I share whatever I find interesting – regardless of topic, time of day, or previously shared content – or should I work on developing a strategy, defining rules about what, when and from whom to share? The question of authenticity comes into play here, as calculation and authenticity often sit at opposite ends of the spectrum. Is it all right to leave room for genuineness and spontaneity, or is that taking too big a risk, considering the persistence [3] of social media content and the potential negative impact a thoughtless click could have on my professional future?
  • Should I accept students into my social network?
    This is an issue I have encountered on LinkedIn and for which I have not yet found a definitive solution I am fully comfortable with. What should I do when students send me a contact request? On the one hand, they are, in some way, indubitably part of my professional environment. On the other hand, they are my students, which makes being connected to them – even virtually – outside of a teaching context seem strange and rather inappropriate to me. Am I living “in the past”, attached to considerations that have progressively become groundless in the hyper-connected world we now live in, or is it (still) wise and reasonable to establish clear boundaries between peers and students?

What is your take on these questions and issues? Are there concerns I have overlooked? What strategies do you use to preserve your integrity on social media?

[1] Carrigan, M. (2016). Social Media for Academics. London: Sage Publications Ltd.
[2] Hank, C. (2012). Blogging your academic self: the what, the why and the how long? In D. Rasmussen Neal (Ed.), Social Media for Academics: a practical guide. Oxford: Chandos Publishing.
[3] Boyd, D. (2014). It’s complicated: The social lives of networked teens. New Haven: Yale University Press.